There are eight lessons and a Final Project in this course. I will reveal Lessons 1 - 8 over the course of the 10-week class to ensure that we are all working through the content as a group and to make sure all of the content is up to date. The Final Project will be revealed during or before Week 7. Be sure to skim it over so you have an idea of where we will end up. You do NOT need to start the Final Project until Week 9. Of course - I'm always happy to discuss Final Project ideas whenever you like.
As we begin the course, it is important to consider what is distinctive about applying geospatial technology to environmental challenges. The term 'environmental' means different things to different people, and even within the realm of GIS there are diverse perspectives and ways to frame it. In this lesson, we will strive to clarify this area of application by introducing environmental concepts, exploring three examples of environmental challenges, and considering the role that GIS and other geospatial technology play in addressing them.
At the successful completion of Lesson 1, you will have:
If you have questions now or at any point during this lesson, please feel free to post them in the Lesson 1 Discussion.
This lesson is one week in length and is worth a total of 100 points. Please refer to the Course Calendar for specific time frames and due dates. To finish this lesson, you must complete the activities listed below. You may find it useful to print this page first so that you can follow along with the directions. Simply click the arrow to navigate through the lesson and complete the activities in the order that they are displayed.
SDG image retrieved from the United Nations [1]
Environmental geospatial technology applications typically involve managing a physical system: its land, water, air, and biota. An interesting question to ask is, “for whom are we managing this environment?” Within this line of inquiry, we can divide environmental applications of geospatial technology and data into two broad categories: 1) managing the environment to protect the ecosystem services that humans rely on, and 2) managing the environment for its own sake and protecting the wildlife that lives there.
One way to distinguish these two scenarios is by using the labels conservation and preservation. Conservation is the management of natural resources that we, humans, use so that they are available today and remain available to us in the future. These ecosystem services include protecting sources of clean drinking water, vegetation that prevents erosion and filters the air, landscapes with healthy soil that will continue to support agriculture and food production, and insect populations, like bees, that are required to sustain the plants and food we eat. Preservation, on the other hand, is the management of habitats and natural areas so that they are left undisturbed by human activity, allowed to operate according to their natural processes, and able to support wildlife for its own sake. In many cases, it may appear that we are protecting the environment for its own sake when, in fact, we are protecting the ecosystem services that benefit humans. This raises the question of whether all of our management activities ultimately target ecosystem services.
The Millennium Ecosystem Assessment (2005) [2] defines ecosystem services this way and illustrates the four categories in Figure 1:
Ecosystem services are the benefits people obtain from ecosystems. These include provisioning services such as food, water, timber, and fiber; regulating services that affect climate, floods, disease, wastes, and water quality; cultural services that provide recreational, aesthetic, and spiritual benefits; and supporting services such as soil formation, photosynthesis, and nutrient cycling. The human species, while buffered against environmental changes by culture and technology, is fundamentally dependent on the flow of ecosystem services.
There is some overlap between the conservation and preservation approaches to environmental management, but it’s useful to be cognizant of this distinction when performing analyses so we have a clear understanding of our end goal and what our results are intended to inform.
Let’s narrow our focus on environmental management from the broad concepts of conservation and preservation and contemplate some specific themes to which GIS and geospatial technology could be applied. Some that come to mind are:
I see these application areas as likely use cases for geospatial technology within an environmental context. These themes overlap with many disciplines, such as medicine, engineering, biology, and chemistry, which makes defining “environmental” challenging. There are environmental aspects to all of these themes, and geospatial technology is well-suited to many of them. So, what is it about these themes that makes them well-suited to geospatial technology? How is our use of geospatial technology in these contexts unique relative to other geospatial applications? Ultimately, how can we evaluate environmental challenges in spatial data science?
To help answer these questions, this course presents environmental challenges and engages analysis and evaluation methods with projects that are more representative of what you might encounter in the field as an environmental analyst of some sort. As I think about the question of what 'environmental' is, I break it down into a few categories that can be useful in defining it. Consider the characteristics of each of the following prompts in the context of environmental applications:
I can imagine instances where environmental applications of geospatial technology stand apart from other projects in each of these categories. An outcome of Lesson 1 is to identify how environmental geospatial applications are unique by digesting some background material and having a discussion about it. In the next section, we will investigate three particular use cases of environmental geospatial applications to help frame our discussion.
Millennium Ecosystem Assessment, 2005. Ecosystems and Human Well-being: Synthesis [3]. Island Press, Washington, DC.
UN Environment (2019). Global Environment Outlook – GEO-6: Healthy Planet, Healthy People. Nairobi, Kenya. University Printing House [4], Cambridge, United Kingdom.
The deliverable for this week is a discussion about the role of geospatial technology and spatial data in environmental management. To facilitate that discussion, I present three applications that are well suited to geospatial technology. Take a look at the resources provided here, and feel free to extend your search beyond these links to get an idea of what these use cases entail and how geospatial technology and spatial data fit into the process. I've chosen these three applications to represent different types of environmental work: a large construction project, municipal waste management, and wildfire and resource management. Each is what we would consider an "environmental challenge," yet each has a different purpose and context, ranging from broad-scale government regulation to local-scale engineering to applied science. Think about any similarities or differences among these examples as you explore them.
In 1970, the National Environmental Policy Act [5] and the Environmental Protection Agency [6] were created to formalize attention to the environmental impacts of decisions and projects. Browse their websites and any other resources you discover to research what NEPA and the EPA are all about. Specifically, look at what their missions are.
A key component of NEPA is the requirement for certain projects to develop an Environmental Impact Statement (EIS) that details the potential consequences of the project's implementation on the physical environment. EISs, therefore, are essentially thorough analyses of large construction projects and how they might interact with all sorts of physical and biological systems. These statements have tremendous potential for geospatial technology application due to the spatially explicit nature of the large projects that require an EIS. The official specification for Environmental Impact Statements [7] can be found in the Code of Federal Regulations. Check out sections 1502.1, 1502.15, and 1502.16, which provide some insights into why EISs are required and what they should include.
To view a completed EIS, all of which are public records, you can search for one on the EPA website [8]. To help you see final EISs, I've downloaded statements for a couple of wind farm projects:
Other related documents that you might find interesting are the GPWF Record of Decision [11], which details how or if the project will proceed, and a video that describes the completed Grand Prairie Wind Farm project [12].
The University Area Joint Authority (UAJA) [13] manages the wastewater treatment for the municipalities of State College and the surrounding region. It is a traditional municipal sewage treatment facility that is responsible for the transport of sewage into the facility and the disposal of residual waste and water. Some of the facility's output enters local waterways directly, and other outputs are reused in agricultural settings, an initiative they call "beneficial reuse alternatives." Additionally, the UAJA has addressed other environmental impacts, such as an issue with odors in the nearby neighborhoods.
These two activities, beneficial reuse and odor control, provide opportunities for geospatial analysis. UAJA produced a report describing their plans for alternative uses of treated wastewater [14]. You will see sections about different options, including urban reuse, agricultural irrigation, and direct injection, and the potential impacts of these plans on drinking water quality and water temperature. UAJA also shared findings from an odor study [15] performed in response to complaints from residents living near the treatment facility. The study sampled odor levels in various locations surrounding the facility and identified possible sources of the nuisance smells. Efforts to control the odors require spatial data showing where issues currently occur, where they originate, and how they are transported via wind, etc. Much of the sample data was collected using "human sensory testing" via an observation form [16]. The form is interesting both for the fun of seeing how odors are classified and, more importantly, how the location of each observation was recorded, which has implications for how the spatial data must be processed for use in a GIS. This is a form [17] that citizens can submit to record an odor observation.
Wildfires are among the most catastrophic natural disasters, burning millions of acres and affecting countless ecosystems globally each year. Their consequences extend to human well-being, biodiversity, climate stability, and socio-economic progress. To prevent, control, and mitigate the effects of these fast-moving fires, researchers and decision-makers need reliable and timely data about fire frequency, behavior, and impacts. In this context, geospatial technology and spatial data are powerful instruments capable of furnishing vital insights.
NASA's Fire Information for Resource Management System (FIRMS) [18] is a tool that provides data about active fires and thermal anomalies or hot spots. As outlined on the website, the focus and objectives of FIRMS include "providing quality resources for fire data on demand, working with end users to enhance critical applications, assisting global organizations in fire analysis efforts, delivering effective data presentation and management." The University of Maryland originally developed the system using funding from NASA's Applied Sciences Program and the United Nations Food and Agriculture Organization (UN FAO). FIRMS migrated to NASA's LANCE (Land, Atmosphere Near real-time Capability for EOS) in 2012.
Real-time fire detections in the U.S. and Canada are viewable online at the FIRMS US/Canada Fire Map [19], and global fire detections are viewable online at FIRMS Global [20] within 3 hours of satellite observation. The active fire data is also downloadable [21] in various formats, including shapefiles and KML files. FIRMS uses satellite observations from the Moderate Resolution Imaging Spectroradiometer (MODIS) and the Visible Infrared Imaging Radiometer Suite (VIIRS) instruments to detect, verify, and track active fires and thermal anomalies or hot spots. The information is delivered in near real-time (NRT) to decision-makers through alerts, analysis-ready data, online maps, and web services.
For additional information, check out the AGO StoryMap: FIRMS: Fire Information for Resource Management System Managing Wildfires with Satellite Data [22]. Also, data about air quality during wildfires can be found at AirNow [23]. The Fire and Smoke Map [23] reports wildfire smoke and air quality information using the official U.S. Air Quality Index (AQI) for more than 500 cities across the U.S. and Canada. Try viewing the full extent of North America as well as zooming in on your city or region.
Answering the question, "What are environmental applications of geospatial technology?" is perhaps more complicated than it first seems. This is due in part to the diversity of application areas, purposes, and audiences for geospatial analysis projects that deal with spatial data and the physical environment. This discussion activity is our opportunity to start engaging in environmental geospatial technology by talking about what it is before we get into the nuts and bolts of how we commonly implement it in later lessons.
First, read through the three scenarios on the previous page and think about how each of them represents an environmental application of geospatial technology.
In Lesson 1, we explored the definition of what environmental applications of geospatial technology are. In Lesson 2, we will start investigating available spatial data and ways to share maps with our audience.
Lesson 1 is worth a total of 100 points.
If you have anything you'd like to comment on or add to the lesson materials, feel free to post your thoughts in the Lesson 1 Discussion. For example, what did you have the most trouble with in this lesson? Was there anything useful here that you'd like to try in your own workplace?
One of the first steps in any geospatial project is to find relevant datasets that you will use either to create maps, conduct an analysis, or both. For example, many projects in the environmental field require a site assessment with an inventory of natural features and related concerns or opportunities. This information is necessary to create management plans for conservation and recreation areas, environmental impact assessments for development projects, and future land use, zoning, and parks and recreation plans for city planning, as well as to screen and rank potential sites for various uses and to plan for field data collection events.
In the U.S., most of this information is readily available on the Internet from various federal and state agencies. For example, the United States Geological Survey (USGS) [24], the U.S. Department of Agriculture (USDA) [25], the National Renewable Energy Laboratory (NREL) [26], the U.S. Forest Service (USFS) [27], the U.S. Fish & Wildlife Service (USFWS) [28], the U.S. Environmental Protection Agency (USEPA) [29], the National Oceanic and Atmospheric Administration (NOAA) [30], the Federal Emergency Management Agency (FEMA) [31], and the U.S. Census Bureau [32] provide geospatial datasets related to elevation, soil types, current and historical land use/land cover, wetland inventories, hydrological features, wildlife inventories, habitat assessments, invasive species, proximity or susceptibility to pollution, climate, energy potential, risks of fire, drought, and flooding, and demographic data. Also, many datasets are available within ArcGIS Online as data services.
GIS and geospatial technology make it very easy to combine this information into one place, analyze it, and create maps to communicate the information to interested parties. Before we can start analyzing data, we need to know where to find it and how to work with it. We are going to explore several providers of environmental data to create a series of natural features maps. We will explore each organization’s website to find which data sets they offer and information about each data set (metadata). We will also explore different methods to view each dataset, including interactive mapping websites and online data services.
Your organization is beginning a new conservation project on the border of Montana and Wyoming. The project management team needs to understand the natural features of the site to plan field data collection efforts. Your job is to locate relevant geospatial datasets, communicate their opportunities and limitations, and share them with the team in a user-friendly format.
At the successful completion of Lesson 2, you will have:
If you have questions now or at any point during this lesson, please feel free to post them in the Lesson 2 Discussion.
This lesson is one week in length and is worth a total of 100 points. Please refer to the Course Calendar for specific time frames and due dates. To finish this lesson, you must complete the activities listed below. You may find it useful to print this page out first so that you can follow along with the directions. Simply click the arrow to navigate through the lesson and complete the activities in the order that they are displayed.
SDG image retrieved from the United Nations [1]
One of the first steps in any geospatial project is finding data and metadata related to your topic and study area. I like to think of this phase as detective work. You often need to search for detailed clues in many different places before you can understand the bigger picture. For example, the same data set can often be obtained from multiple agencies, in multiple formats, and in multiple geographic packages (e.g., grouped by state or county vs. seamless).
You may need to consult several different sources to find all of the information you need to use the data, such as date, scale, description of coded values, etc. You may also use different sources to pre-screen and download the data. These websites are often hyperlinked to each other, so you may bounce back and forth a few times before landing in the right spot. You may find that some interfaces and data products are much easier to work with than others. We will experiment with a few different data providers to demonstrate this concept. The keys to success are budgeting ample time, keeping detailed notes along the way, and asking the right questions before you begin your search.
The best place to start looking for geospatial data is on the web. There has been a push to democratize environmental and climate-related data, and we will take full advantage of that initiative. I have listed a few different types of websites, typical data you will find on them, and links to some example sites below. This is not meant to be an exhaustive list, but rather an overview to get you pointed in the right direction.
Most websites provide links to download raw GIS and geospatial data that you can input into spatial analyses. Shapefiles, geodatabases, GeoJSON, and rasters are typically available for download in one or more of the following options:
GIS and geospatial files from Options 2 and 3 are typically aggregated by one or more geographic units such as counties, 7.5' topographic quadrangles (topo quads), or watersheds. You may need to download multiple files to cover your entire study area and then merge them into a single data set using ArcGIS (see the sketch below). The higher-quality sites typically offer interactive maps where you can browse available GIS and geospatial data and metadata.
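Since tiled downloads often arrive as one file per county, quad, or watershed, a quick scripted merge can save time. Below is a minimal sketch using ArcPy's Merge tool; it assumes the ArcGIS Pro Python environment, and the folder and file names are hypothetical placeholders.

```python
# Minimal sketch: combine several county-level downloads into one shapefile.
# Assumes the ArcGIS Pro Python environment (arcpy); paths and names are placeholders.
import arcpy

arcpy.env.workspace = r"C:\GEOG487\downloads"
county_tiles = ["wetlands_countyA.shp", "wetlands_countyB.shp", "wetlands_countyC.shp"]

# Merge the individual tiles into a single dataset covering the whole study area.
arcpy.management.Merge(county_tiles, r"C:\GEOG487\downloads\wetlands_merged.shp")
```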
Several years ago, finding information in a readable format was one of the most challenging parts of geospatial work. This is no longer the case, as most government data sets have been converted into GIS and geospatial formats accessible on the Internet. Typically, government data is available in at least two different formats: raw geospatial files (e.g., shapefiles, geodatabases, rasters) and online data services. You are likely familiar with working with raw GIS data within ArcGIS Pro or using online data services such as the ArcGIS Living Atlas.
Online data services are geospatial layers that you can connect to via the Internet. One of the major benefits of online data services is that they contain seamless versions of data. Seamless data sets combine individual data sets from different locations, scales, and time periods into one dataset. This lets you view and interact with hundreds to thousands of individual data sets simultaneously. For example, you may have worked with paper versions of topographic maps in the past. Each paper map only shows a finite area (e.g., 7.5 minutes) at one scale (e.g., 1:24,000). If you want to view a larger area or a different scale (1:100,000 or 1:250,000), you would need to gather many different paper maps. Using a seamless map service, you only need one data product to access the information from all of these paper maps at the same time. As you zoom to different scales, the underlying data source changes automatically. For example, if you zoom out to view an entire state, the map will display scans of the 1:250,000 maps. As you zoom in closer, the images will be replaced by more and more detailed data sets (1:100,000, 1:24,000).
While seamless datasets can be extremely valuable, they also have their drawbacks. For example, many seamless data sets were created by digitally stitching together multiple adjacent data layers that were created at different time periods. Mosaicking them together into one dataset gives the impression that the metadata of the underlying data sets are uniform when they are not. You must be careful using seamless data sets if time is an important variable in your analysis. This is only a concern if the data were not collected continuously, such as via satellite. Examples of continuous data include digital elevation models and products derived from remote sensing sources such as the National Land Cover Data Set (NLCD).
You can view online data services in a variety of ways. For example, you can use viewers embedded in an organization's website, ArcGIS.com, or add them directly to your layout in ArcGIS Pro. Interactive mapping websites allow you to view and interact with online data services using any Internet browser. Sites will usually include a map viewer, legend, tools to interact with your data such as zoom and identify, and tools to download subsets of data directly from the interactive map. Interactive maps allow you to customize what is displayed on the map by turning available layers on and off in the legend. They may also enable you to view the underlying attributes of each data source.
You will find that the quality and user-friendliness of online interactive map viewers vary dramatically depending on the organization and software used to create them. For example, on some websites, the identify tool only allows you to identify features within one layer at a time. You have to specify which layer is “active” in the legend to view its attributes. On other sites, you must manually refresh the map by clicking on a button every time you turn layers on and off.
Adding online data services directly to your ArcGIS session gives you many of the benefits of interactive mapping websites while providing much more flexibility to customize your map. Depending on the type of service, your options for controlling how the data are displayed may be limited. For example, you may be unable to change certain aspects of the symbology or use the layers as input to geoprocessing tools such as the Clip tool. They often have scale-dependent rendering settings that you may be unable to alter. Aside from these limitations, there are many benefits to using online data services. They can save a lot of time since you don’t have to download each data set individually and set the symbology for each one. This could trim a few days from your work schedule if you use many complex data sets.
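If you prefer to script it, a data service can also be added to a map from the ArcGIS Pro Python window. The sketch below assumes arcpy is available and uses a placeholder service URL.

```python
# Minimal sketch: add an online data service to the first map in the open project.
# Assumes this runs in the ArcGIS Pro Python window; the service URL is a placeholder.
import arcpy

aprx = arcpy.mp.ArcGISProject("CURRENT")
m = aprx.listMaps()[0]

# addDataFromPath accepts local datasets as well as REST service URLs.
m.addDataFromPath("https://example.com/arcgis/rest/services/SomeLayer/MapServer")
```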
Interactive mapping websites are a great way to get to know your study area and check the availability of several data sets simultaneously, but they may lack tools for robust spatial analysis. Connecting to map services or the AGO Living Atlas within ArcGIS is an easy way to create base maps, combine data from multiple sources, or integrate your own data layers with publicly available data. Since the data come pre-symbolized, you can save a lot of time setting up your map. Working with raw data gives you the most flexibility as far as interacting with your data within ArcGIS. However, there is typically a steep learning curve in figuring out which attributes to use to symbolize your map and use for your analysis. This can become a very time-intensive exercise. It is best to download only the datasets that you need to modify or input into an analysis project and rely on online data services for the remaining data.
Once you have located and acquired your data, your job is only just beginning. Your input data will likely come from several different sources, have a variety of data formats and extents, cover a range of time periods, and include many different attributes. You need to be aware of these properties before you start to work with your data. A lot of this information is not immediately obvious just by looking at the files; you will need to locate metadata documents to figure out many of the details. You will find that the quality of metadata necessary to understand and work with data varies depending on the source. Oftentimes, official FGDC metadata files are not packaged with the data. It is also possible that the metadata will be packaged with the data but not in a format recognized by ArcGIS (e.g., PDF or Word document), which means you won’t be able to view the metadata in ArcGIS. If metadata files are not packaged with the raw data, you can usually find the information you need somewhere on the source website, by doing a general Internet search, or by contacting the agency or organization that created the data. You may need to visit several different websites to find all of the information you need to answer all of the questions below. Sometimes, one of the most time-consuming parts of an analysis project is figuring out what different fields and attribute values mean (e.g., coded or abbreviated values).
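One quick way to check whether any usable metadata came packaged with a file is to read it with the arcpy.metadata module in ArcGIS Pro. The sketch below uses the Lesson 2 study area shapefile as an example; empty results are a hint that you will need to hunt for the documentation elsewhere.

```python
# Minimal sketch: inspect the metadata attached to a dataset (ArcGIS Pro, arcpy.metadata).
import arcpy
from arcpy import metadata as md

item_md = md.Metadata(r"C:\GEOG487\L2\Study_area.shp")
print(item_md.title)        # blank values suggest no formal metadata shipped with the file
print(item_md.summary)
print(item_md.description)
```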
Instructions: Watch the seven short videos below (~20 minutes total) and review the two Esri web pages listed as required readings. You will need the information they cover to complete the Lesson 2 Quiz. You may want to print the quiz from Canvas and keep track of your answers as you watch each video.
Video 1
Video 2
Video 3
Video 4
Video 5
Video 6
Video 7
Reading 1: Skim the Esri Living Atlas of the World Story Map [72] and browse the Esri Living Atlas [73] content.
Reading 2: Skim the content of the Web App Builder for ArcGIS help information about the Swipe widget [74]
This section provides links to download the Lesson 2 data and reference information about each dataset (metadata). Briefly review the information below so you have a general idea of the data we will use in this lesson. You do not need to click on any hyperlinks, as we will do this in the Step-by-Step Activities.
In this lesson, we will experiment with two types of online datasets: Esri Base Maps and data services hosted by the U.S. Geological Survey's National Map.
The websites and servers may occasionally experience technical difficulties. If you happen to work on this lesson while one of the sites is down, you may need to stop work and start again the following day to allow time for the servers to come back online. Beginning this lesson before Wednesday will help avoid any issues.
Note: You should not complete this activity until you have read through all of the pages in Lesson 2. See the Lesson 2 Checklist for further information.
Create a new folder named "GEOG487" directly on your C drive. It should have a pathname of "C:\GEOG487." You will use this folder for the remaining lessons. It is very important that your pathname is short and has no spaces in it; a long pathname or spaces will cause problems later in the course when we use geoprocessing and spatial analyst tools.
Create a new folder in your GEOG487 folder called "L2." Download a zip file of the Lesson 2 Data [75] and save it in your "L2" folder. The rest of the datasets we will use in the lesson are available as online data services. Information about all datasets used in the lesson is provided below:
Study_area.shp: Shapefile showing the boundaries of the proposed project.
Metadata:
You can also view the metadata for layers in ArcGIS Online by clicking on the "Show Properties" tab. When you hover over each dataset, you should see the horizontal ellipsis or more options menu. Click on the ellipsis > Show Properties > Information.
Additional resources:
Sentinel-2 10m Land Use/Land Cover Time Series [85] - Global map of Land Use/Land Cover (LULC) from ESA Sentinel-2 imagery
In the Step-by-Step section for Lesson 2, we will explore a variety of environmental geospatial datasets available online, review metadata, create maps using ArcGIS.com, and share the maps as an interactive web application.
Note: You should not complete this step until you have read through all of the pages in Lesson 2. See the Lesson 2 Checklist for further information.
In Lesson 2, we are going to create a website that contains interactive maps with datasets related to our project scenario described in the Introduction. We will create maps and web applications using ArcGIS Online. We will save our maps in the cloud using Penn State’s ArcGIS Online for Organizations account. Our final product will look like the example below.
This lesson will provide many details and graphics illustrating how to do each step using ArcGIS Online. Later lessons will not provide as much detail, as we expect you to reference previous lessons and explore help topics if you get stuck.
It is important to understand the opportunities and limitations of your input datasets before you begin working with them. We will do this by exploring the available metadata. As a geospatial professional, you will often be the only person reviewing this level of detail about the datasets. It will be your job to communicate what you find with the rest of your team.
Metadata | Imagery | Land cover
---|---|---
Timeframe | |
Scale | |
Organization(s) that Created Data | |
Organization Hosting Web Data | |
Citation Information Requested by Data Provider | |
Description of Coded Attribute Values Included? Y/N | |
Were there particular pieces of information that were harder to find than others? Did you notice any differences in the quality and ease of use of different data provider’s websites?
Note: Critical Thinking Questions are not graded. They are provided to help you think about the lesson concepts. I encourage you to share your thoughts on the lesson discussion forum.
Before you can access ArcGIS Online, you need to confirm your account. You will only have to do this once to have access for the rest of the semester. ArcGIS Online has Group features that help us manage a group such as this course. In order to take advantage of them, I will need you to "Sign In" to the Penn State ArcGIS Online organization using your Penn State Username and Password. Follow the steps below to Sign In and confirm your account for the first time. Note: If you are not enrolled in the class, you will not have access to Penn State’s ArcGIS Online for Organizations account; there are directions at arcgis.com to create a Personal Account that you can use to complete this course.
Please do not log in using an account that you created outside of the program. According to Esri’s website, “you will transfer ownership of your items to Penn State's ArcGIS Online for Organizations” for any content that is saved in the account you log in with. This means that any instructors or students using the Penn State account will have administrative rights to your preexisting content.
Note: There is a group for each semester.
Note: Data layers often have default names that will be meaningless to most people who read your Map (like your boss and clients). Make sure you review layer names and change them so your maps make sense to people other than yourself.
Which National Park is the Study Site near?
Which state(s) is the Study Area located in?
How much detail can you see in the imagery if you zoom in close?
The second dataset we will explore is part of the United States Geological Survey (USGS) National Map. The land cover data service is hosted on their website and server. We can access the data in ArcGIS Online and by using their website (there are other options too).
After exploring the same dataset using ArcGIS Online and the online viewer provided by the data creation agency, can you think of any scenarios where one method would be preferable over the other?
Now that we have created individual maps for each dataset, we can combine them into one using a template web application provided by Esri. You can use these templates to share your data in an easy-to-use format. Note: You need to be a member of the GEOG487 group to complete this section.
We will be sharing our work with the class throughout this course in ArcGIS Online. Please follow Penn State’s Academic Integrity guidelines covered in the Syllabus. (As a group administrator, I will be able to see if you created your own maps or made a copy of another student’s work).
I encourage you to view other students' work to learn and be inspired. If you incorporate any of their ideas in your own work, please list their name and map URL in your sources.
When designing maps and applications, it is best to assume that your end users don’t know where the study area is and are not familiar with the data – where it came from, what it is supposed to be used for, etc.
It is your job to point them in the right direction by crafting descriptive titles, useful captions, and helpful legends. Be nice to your audience! A good rule of thumb is to show your map to a non-geospatial friend. If they look confused, you need to revise it.
Good captions describe what a map shows AND why the reader should care.
Bad Caption – “This map shows the study area in red.”
Good Caption – “The study area (shown in red) is located near Yellowstone National Park along the border of Montana and Wyoming. The terrain consists of steep mountains and valleys, making transportation by car difficult.”
That’s it for the required portion of the Lesson 2 Step-by-Step Activity. Please consult the Lesson Checklist for instructions on what to do next.
Review some of the other configurable web application templates at ArcGIS > Gallery > Apps [89]. Do you think the Swipe/Spyglass widget is the best choice to present the two datasets from our lesson? Post your thoughts in the Canvas Lesson 2 Discussion.
Try This! Activities are voluntary and are not graded, though I encourage you to complete the activity and share comments about your experience on the lesson discussion board.
Advanced Activities are designed to make you apply the skills you learned in this lesson to solve an environmental problem and/or explore additional resources related to lesson topics.
In Lesson 2, we learned about several places to search for GIS and geospatial data online and created a web app to help a team get acquainted with their new field site. We only included a few pieces of information so far - imagery, land cover, and the study area boundary. Browse or search through the available datasets in ArcGIS Online (Click on Map > + Add layer > select Living Atlas from the pulldown and browse or search for available layers). Choose another dataset you feel would support the team's mission and add it to the map. Include the Study Area Boundary and Imagery Hybrid basemap. Save your map as "Map 3 - Your Dataset Name." Update the tags and description and share it with the class in ArcGIS Online. Esri has recently added a lot of new data to AGO (including ArcGIS Living Atlas live feeds [90] that you can search for when you add features using +Add layer in the Map Viewer).
In Lesson 2, we discussed how and where to find geospatial data for environmental projects and common formats of GIS and geospatial data available on the Internet. We also used several online data services to create an interactive mapping application in ArcGIS Online for Organizations. There are both pros and cons of using online data services. In the next lesson, we will compare and contrast them with using raw geospatial data.
Lesson 2 is worth a total of 100 points.
Criteria | Full Credit | Partial Credit | Minimal Credit | No Credit | Possible Points
---|---|---|---|---|---
Swipe Map App | Link to swipe map app is present and includes proper imagery and land cover maps. (20pts) | Link is present, but app is missing an element. (15pts) | Link is present, but app is missing several elements. (10pts) | Link is missing. (0pts) | 20pts
Advanced Activity Map | Link to map is present and includes imagery, study area boundary, and one additional layer. Map is properly described. (20pts) | Link is present but is missing an element (map layers, descriptions). (15pts) | Link is present but is missing several elements (map layers, descriptions) or does not function properly. (10pts) | Link is missing. (0pts) | 20pts
Reflection | Discussion is present and includes 150-300 words addressing other applications for this activity, benefits and drawbacks of online maps, and why the 3rd data layer was chosen. (20pts) | Discussion is present but is missing a required topic. (15pts) | Discussion is present but is missing several required topics. (10pts) | Discussion is missing. (0pts) | 20pts
Prose Quality | Is free or almost free of errors (complete sentences, student's own words, grammar, spelling, etc.). (10pts) | Has errors, but they don't represent a major distraction. (5pts) | Has errors that obscure meaning of content or add confusion. (0pts) | | 10pts
TOTAL | | | | | 70pts
If you have anything you'd like to comment on or add to the lesson materials, feel free to post your thoughts in the Lesson 2 General Discussion. For example, what did you have the most trouble with in this lesson? Was there anything useful here that you'd like to try in your own workplace?
This page includes links to resources such as additional readings, websites, and videos related to the lesson concepts. Feel free to explore these on your own. If you'd like to suggest other resources for this list please send the instructor an email.
In Lessons 3 and 4, we will practice completing many of the typical required steps in a GIS workflow:
We will explore these ideas in the context of a wetland restoration project in the Great Lakes Region, where we will discuss what wetlands are, why they are important, and common environmental concerns. We will use both publicly available and private data sets, including time-series, site-level data created as part of a cooperative research project between the Environmental Protection Agency, the U.S. Fish and Wildlife Service, the U.S. Geological Survey, and the University of Michigan.
You are part of a research team tasked with creating a restoration plan for a portion of a degraded wetland complex. You need to understand how the vegetation within the wetland has historically responded to changes in water levels. This information will enable you to predict the health of the wetland in future scenarios, including anticipated hydrological changes due to climate change. You begin by searching for publicly available sources of data for your analysis. You find that there isn't a dataset that has sufficient detail about vegetation for your study area. Furthermore, you are unable to find a dataset that shows wetland vegetation at multiple points in time. Your team hires a remote sensing specialist to acquire and interpret historical imagery and digitize polygons representing vegetation over time. Your job is to figure out how to use the vegetation data and GIS software to understand the relationship between fluctuating water levels and changes in vegetation.
If you have questions now or at any point during this lesson, please feel free to post them to the Lesson 3 Discussion.
This lesson is worth 100 points and is one week in length. Please refer to the Course Calendar for specific time frames and due dates. To finish this lesson, you must complete the activities listed below. You may find it useful to print this page out first so that you can follow along with the directions. Simply click the arrow to navigate through the lesson and complete the activities in the order that they are displayed.
SDG image retrieved from the United Nations [1]
One of the amazing aspects of GIS is the ability to combine information about multiple topics from multiple time periods and from multiple sources into one place and then analyze them spatially. There are tradeoffs to consider for this convenience. As we saw in Lesson 2, before you can use data that you did not create yourself, you need to invest a great deal of time to acquire and understand each dataset. The more data sets you include, the more time you need to spend on these tasks. After you have acquired and understood your input datasets, you still need to customize them for your project. Other time-consuming tasks include interpreting the results of your analysis and figuring out how to best communicate them to your target audience. The analysis itself can be the quickest part – you typically just need to click a few buttons and wait for a GIS tool to run.
Customizing data for your project involves two main tasks: 1) modifying your input datasets so that they are consistent enough to combine them in spatial analysis and 2) modifying them so that they can answer your specific questions in your study area. The specific sub-tasks can be grouped into two main categories: spatial tasks and attribute tasks. It is better to address spatial issues first since you will likely add or remove records from your attribute tables in the process. Examples of each type are described below:
Spatial Tasks | Attribute Tasks |
---|---|
Convert Data Format | Understand Coded Values |
Resolve Projection / Misalignment Issues | Recode Missing Data |
Customize Data Organization | Recode Typos |
Correct Topology Errors | Reclassify Attributes |
Modify Extent | Create New Attributes |
Confirm Scale | Convert Units |
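To make the attribute-side tasks concrete, here is a minimal sketch of recoding missing values and typos with an update cursor. It assumes arcpy; the shapefile path, field name, and values are placeholders rather than the actual Lesson 3 schema.

```python
# Minimal sketch of two attribute tasks: recode missing data and recode typos.
# Assumes arcpy; the path, field name, and values are hypothetical.
import arcpy

fc = r"C:\GEOG487\L3\vegetation.shp"
with arcpy.da.UpdateCursor(fc, ["VEG_TYPE"]) as cursor:
    for row in cursor:
        if row[0] in (None, "", " "):
            row[0] = "Unknown"     # recode missing data
        elif row[0] == "Catail":
            row[0] = "Cattail"     # recode a typo
        cursor.updateRow(row)
```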
According to Esri, projecting on the fly in ArcGIS works better for vector data, since projecting rasters on the fly is much more complex than projecting vector data. Projecting data on the fly, regardless of whether the data is vector or raster, does not always produce consistent results: sometimes it works perfectly, other times it does not. You can read more about it in this Esri blog post, "Projection on the fly and geographic transformations [98]"
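When consistency matters, it is safer to project everything to a common coordinate system up front rather than rely on projecting on the fly. Below is a minimal sketch with arcpy; the file names are placeholders, and the target coordinate system (NAD 1983 UTM Zone 17N) is just an example suited to the western Lake Erie area.

```python
# Minimal sketch: project a dataset to a common coordinate system before analysis.
# Assumes arcpy; paths are placeholders, and the target spatial reference is an example.
import arcpy

in_fc  = r"C:\GEOG487\L3\vegetation.shp"
out_fc = r"C:\GEOG487\L3\vegetation_utm17.shp"
target_sr = arcpy.SpatialReference(26917)  # NAD 1983 UTM Zone 17N

arcpy.management.Project(in_fc, out_fc, target_sr)
```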
What are wetlands? Wetlands can be broadly described as transition zones between water and land. They are notoriously hard to define because their characteristics vary greatly depending on their location and surrounding environment. One trait all wetland varieties share is that they have properties of both upland and aquatic environments, which creates unique ecosystems.
Wetlands are important for several reasons. First, they support a vast array of life with biodiversity and population counts comparable to tropical rainforests and coral reefs. For example, they are used as nesting and feeding grounds by many species of migratory birds, and most fish and shellfish are dependent on wetlands for some portion of their lifecycle. Second, wetlands help regulate the flow of water over large regions. During extremely wet periods, wetlands absorb and store excess water, preventing floods and associated damage. Third, wetlands help to recharge groundwater aquifers, a source of drinking and irrigation water, during times of drought when rain is scarce. Fourth, wetlands help to filter and clean water. As water enters a wetland, its speed is drastically reduced, mitigating possible erosion of valuable soils. Reducing water speed also causes suspended and dissolved particles, such as pollutants and nutrients, to drop out of the water when they enter a wetland. Plants and microorganisms living in the wetland then absorb and break down these particles. Artificial and natural wetlands are often used to treat stormwater and wastewater for this reason. And fifth, the combination of water and wildlife found in wetlands support several types of recreation, such as fishing, boating, hiking, and bird watching.
Unfortunately, wetlands are often threatened by human activities. Wetlands can either be completely eliminated or degraded so much that their ecosystem cannot function. For example, wetlands are often drained to expose new land for agriculture or development, or flooded to create lakes. Over 96% of the original wetlands along western Lake Erie have been lost in this manner since the 1860s. In addition, runoff from lawns and impervious surfaces can add excessive amounts of pollutants such as fertilizers, pesticides, and sediment, which degrade the wetlands that absorb the material. A common land management technique is to build earthen dikes around wetlands, hydraulically separating them from surrounding areas. This artificial separation eliminates the natural cycle of high and low water levels necessary for vegetation regulation. It also limits the movement of small biota in and out of wetlands, which is critical for the reproduction of many species.
Wetlands are also threatened by the spread of invasive species, also known as non-native or exotic species. Both plants and animals can be considered invasive. These species are naturally very adaptable and aggressive and have a high reproductive capacity. They are considered invasive only when they spread outside of their natural range, where they out-compete native species due to their vigor and lack of natural enemies. Once established, they are extremely difficult to eliminate. Their presence in an ecosystem often causes economic, human health, and environmental damage. Some examples of invasive species in the Great Lakes Region are purple loosestrife, common reed, reed canary grass, narrow-leaved cattails, hybrid cattails (narrow/broad-leafed), emerald ash borer, common carp, sea lamprey, zebra mussels, and West Nile virus.
Recognizing the importance of wetlands is a relatively recent development. For example, the Ramsar Convention, an international treaty for the conservation of wetlands, wasn’t adopted until the mid-1970s. The U.S. North American Wetlands Conservation Act, which provides funding to protect and manage wetland habitat, wasn’t enacted until 1989. Since then, government agencies have created a set of laws regulating the use and management of wetlands. They also established a network of protected wetland areas, managed by various state and federal agencies, in which wetland managers try to restore degraded wetlands while balancing competing demands such as recreation and habitat for particular species and controlling the spread of invasive species.
We are going to explore several of the data customization concepts described above in the context of a historical wetland restoration project within a federally protected area. The case study site is in the Ottawa National Wildlife Refuge [99], about 20 km east of Toledo, Ohio.
There are several required readings for Lesson 3. The first one describes a previous wetland restoration project involving GIS led by some of the same researchers who created the vegetation data we will use in Lessons 3 and 4. The wetland described in the report, Metzger Marsh, is adjacent to the study site. The second reading provides more information about invasive species in the Great Lakes. The third fact sheet describes the actual restoration work completed at the study site. The last link provides additional information about a wetland improvement project at the Magee Marsh Wildlife Area, which is also adjacent to the study site.
This section provides links to download the Lesson 3 data and reference information about each dataset (metadata). Briefly review the information below so you have a general idea of the data we will use in this lesson. You do not need to click on any of the hyperlinks as we will do this in the Step-by-Step Activities.
In this lesson, we will experiment with two different types of data providers, both public and private. For the publicly available data, we will use a combination of online data services and raw GIS files, which you will have to download yourself. The private data is included in the zip file below.
Keep in mind, the websites and servers of public data providers may occasionally experience technical difficulties. If you happen to work on this lesson while one of the sites is down, you may need to stop work and start again the following day to allow time for the servers to come back online.
Note: You should not complete this activity until you have read through all of the pages in Lesson 3. See the Lesson 3 Checklist for further information.
Create a new folder in your GEOG487 folder called "L3." Download a zip file of the Lesson 3 Data [104] and save it in your "L3" folder. Extract the zip file and view the contents. Information about all datasets used in the lesson is provided below:
Base Map:
USDA Farm Service Agency - National Agricultural Imagery Program (NAIP):
Boundaries of Fish & Wildlife Service Lands:
National Wetlands Inventory: The U.S. Fish and Wildlife Service National Wetlands Inventory classifies wetlands into 5 major ecological types, subdivided into numerous categories.
Great Lakes Coastal Wetland Inventory: Detailed wetlands inventory of the Great Lakes Region developed through the Great Lakes Coastal Wetland Consortium.
The Step-by-Step Activity for Lesson 3 is divided into two parts. In Part I, we will look at publicly available datasets from Esri, the Fish & Wildlife Service National Wetlands Inventory, the Great Lakes Coastal Wetlands Inventory, and aerial photos from the USDA Farm Service Agency. In Part II, we will explore site-level data from three time periods that was digitized from high-resolution aerial photos. We will use the various datasets listed above to practice many of the data customization tasks covered in the lesson text.
Note: You should not complete this step until you have read through all of the pages in Lesson 3. See the Lesson 3 Checklist for further information.
In Part I, we will explore our study area and the time-series aerial photos used to digitize the vegetation data we will use in Part II. We will also look at two publicly available datasets specifically related to wetlands: the National Wetlands Inventory from the U.S. Fish & Wildlife Service and a more detailed wetlands inventory from a regional public agency called the Great Lakes Commission. In the process, we will explore several different data delivery options and sources in ArcGIS: Esri Map Packages, Esri Basemaps, ArcGIS Online Datasets, Web Map Services (WMS) from GIS Servers, and raw GIS files.
You can share your own data and maps by creating a map package in ArcGIS Pro. Go to the Share tab, Package group, and select Map Package. The file can either be uploaded to ArcGIS Online or saved locally.
Setting your Current Workspace allows you to customize where output files created during geoprocessing steps are saved. By default, files will be saved at …My Documents\ArcGIS.
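The same environment setting can be made in the Python window; a minimal sketch, assuming arcpy:

```python
# Minimal sketch: send geoprocessing outputs to the lesson folder instead of the default location.
import arcpy

arcpy.env.workspace = r"C:\GEOG487\L3"
arcpy.env.overwriteOutput = True  # optional: lets you re-run tools without renaming outputs
```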
To easily access information related to geoprocessing parameters and environments in ArcGIS: Click on the information icon which will open a dialog window with information about usage and options. It is also a good idea to read the help information related to tools you are not familiar with (click or go to the Project tab > Help).
Take a minute to look at the swipe and flicker tools available on the Raster Layer, Appearance tab, Compare group (be sure an image is highlighted in the Contents pane). These tools can be useful for temporal change detection (especially of satellite images or air photographs of the same location taken at different times), data quality comparison, and other scenarios where you want to visually compare the differences between two layers in your map. Swipe allows you to interactively reveal what is underneath a particular layer; Flicker flashes layers on and off at the rate you specify. You can read more about these tools in the Esri help.
The publicly available datasets we just explored are helpful for familiarizing yourself with your study area or for regional or other large-scale analyses. However, they do not contain the level of detail needed for the site-level analysis we want to conduct. The highest resolution data you can usually find is 1:24,000 scale for vector data and 30 m cell size for raster data. Publicly available datasets also typically do not have time-series information available. In Part II, we are going to explore a high-resolution, time-series dataset that was digitized from the aerial photos we reviewed in Part I. Oftentimes, you will need to digitize information in this manner if you have a small study site or if you want to do an in-depth time-series analysis. The work required to create the data is significant; however, you can do a lot more with your data.
We want to explore how vegetation has changed over time in our study area. To answer our research questions, we need the following datasets: 1) polygons of vegetation species over time, 2) polygons of vegetation groups over time, and 3) polygons of invasive species over time. All of the files need to show just the region within our study site. We will create these custom datasets for three time periods using the Join, Union, Clip, and Dissolve tools. The workflow we will follow is illustrated in the diagram below using the data for the seventies time period. You may wish to consult this diagram after completing each step.
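If you are curious what that sequence looks like as a script, here is a hedged sketch of the general Join → Union → Clip → Dissolve pattern for the seventies data. The exact inputs, field names, and outputs come from the step-by-step instructions; the names below are placeholders.

```python
# Hedged sketch of the general workflow pattern (not the exact lesson parameters).
# Assumes arcpy; file and field names are placeholders.
import arcpy

arcpy.env.workspace = r"C:\GEOG487\L3"

# 1) Join a vegetation lookup table to the digitized polygons on a shared ID field.
arcpy.management.JoinField("seventies.shp", "Veg_ID", "veg_lookup.dbf", "Veg_ID")

# 2) Union the vegetation polygons with the study site boundary.
arcpy.analysis.Union(["seventies.shp", "study_site.shp"], "70s_union.shp")

# 3) Clip to the study site so every output covers the same extent.
arcpy.analysis.Clip("70s_union.shp", "study_site.shp", "70s_clip.shp")

# 4) Dissolve on an attribute (e.g., vegetation group) to aggregate polygons by category.
arcpy.management.Dissolve("70s_clip.shp", "70s_VegGrp.shp", "VEG_GROUP")
```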
Note: You may notice that some of the Veg_IDs are listed as “May Be Invasive” in the “Invasive” field. Two of the most common invasive species in the wetland (narrow-leaved and hybrid narrow/broad-leaved cattails) look very similar to native species (broad-leaved cattails), which makes them difficult to distinguish in aerial photos.
Caution - watch out for similar attribute names like OID. This is not the same as Veg_ID.
Make sure you have the correct answer before moving on to the next step.
The 60s_Join, 70s_Join, and 00s_Join shapefiles should have the number of records and all of the attributes shown below. If your data does not match this, go back and redo the previous step.
Geodatabases may have naming restrictions for table and field names. For instance, a table in a file geodatabase cannot start with a number or a special character such as an asterisk (*) or percent sign (%). Shapefiles do not have such restrictions and allow us to use names such as 60s_Join.
If you receive an Error 000361: The name starts with an invalid character during geoprocessing, check to make sure you are saving your output as a shapefile.
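If you are unsure whether a name will be accepted by a particular workspace, arcpy can check it for you. A minimal sketch; the geodatabase path is a placeholder.

```python
# Minimal sketch: preview how a name would be adjusted for a given workspace.
# ValidateTableName returns a version of the name that is legal in that workspace.
import arcpy

print(arcpy.ValidateTableName("60s_Join", r"C:\GEOG487\L3"))              # folder of shapefiles
print(arcpy.ValidateTableName("60s_Join", r"C:\GEOG487\L3\scratch.gdb"))  # hypothetical file geodatabase
```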
It’s very easy to make mistakes when using geoprocessing tools. For example, you can select the wrong input files by mistake. Another common error is running tools while unknowingly having records selected; any output from geoprocessing tools will only contain the selected records. Comparing your results with your input datasets after using automated tools is a good habit to get into.
If you want to double-check the input files you used previously, the parameter settings, environment settings, etc., you can view them under Geoprocessing > History.
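Two habits worth scripting: clear any stray selection before running a tool, and compare record counts between inputs and outputs afterward. A minimal sketch; the layer and file names are placeholders.

```python
# Minimal sketch: clear selections before geoprocessing and sanity-check record counts after.
# Assumes arcpy; layer and file names are placeholders.
import arcpy

# Clear a selection so tools operate on all records, not just the selected ones.
arcpy.management.SelectLayerByAttribute("seventies_layer", "CLEAR_SELECTION")

# Compare feature counts between an input and an output.
in_count  = int(arcpy.management.GetCount(r"C:\GEOG487\L3\seventies.shp").getOutput(0))
out_count = int(arcpy.management.GetCount(r"C:\GEOG487\L3\70s_clip.shp").getOutput(0))
print(f"input: {in_count}  output: {out_count}")
```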
Make sure you have the correct answer before moving on to the next step.
If your data does not match this, go back and redo the previous step.
Make sure you have the correct answer before moving on to the next step.
If your data does not match this, go back and redo the previous step.
Specifying a precision and scale when adding a field to a shapefile gives you the option to limit the number of digits (precision) and decimal places (scale) of values within number fields. There are many situations where you would want to do this. However, there are also occasions where it is best to keep all of your options open. Accepting the default value of 0 for both properties gives you the most versatility. It may seem counterintuitive, but a value of 0 acts somewhat like infinity in this case. Setting custom precision and scale values is only relevant to data stored in an enterprise geodatabase; default values are always enforced when editing data in a shapefile or file geodatabase. Refer to the ArcGIS field data types [129] for more information.
I recommend using values of 0 when you are in the preliminary stages of data exploration. That way you won’t unknowingly exclude values in your results. For example, if you are calculating area values for the first time, you probably won’t know how many digits you will need to store your calculated values (precision) until after you’ve made the calculation. If you estimate a number to use for precision that ends up being too low, you will not be able to store the full range of values. For example, a precision of 2 would limit your values to two digits, whereas a precision of 4 would limit your values to four digits.
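For reference, the same decision comes up when adding a field with arcpy: a precision and scale of 0 (the defaults) leave the field unconstrained. The shapefile path and field name below are placeholders, and the area calculation assumes your data is in a projected coordinate system measured in meters.

```python
import arcpy

fc = r"C:\GEOG487\L3\70s_VegGrp.shp"  # placeholder path

# Add an unconstrained DOUBLE field; precision=0 and scale=0 mean we are not
# limiting the number of digits or decimal places the field can hold.
arcpy.management.AddField(fc, "Area_m2", "DOUBLE", field_precision=0, field_scale=0)

# Populate it from the polygon geometry.
arcpy.management.CalculateGeometryAttributes(fc, [["Area_m2", "AREA"]],
                                             area_unit="SQUARE_METERS")
```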
The Summary Statistics tool (go to the Analysis tab, Geoprocessing group > Tools > search "Summary_Statistics") is another option you can use to calculate statistics for your data. This tool is similar to the “Summarize” option available by right-clicking on a field in an attribute table. The advantage of the Summary Statistics tool is that it allows you to create statistics based on multiple fields. For example, you could use it to find the total area for every unique combination of vegetation type and invasive classification. You could interpret the results to find out which plant type makes up the majority of invasive species for each time period.
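As a script, that example might look like the call below. The input shapefile, output table, and field names (Area_m2, Veg_Group, Invasive) are assumptions; use the fields that exist in your own data.

```python
import arcpy

# Sum polygon area for every unique combination of vegetation group and
# invasive classification; the output is a stand-alone table of statistics.
arcpy.analysis.Statistics(
    in_table=r"C:\GEOG487\L3\70s_Species.shp",
    out_table=r"C:\GEOG487\L3\70s_area_by_group.dbf",
    statistics_fields=[["Area_m2", "SUM"]],
    case_field=["Veg_Group", "Invasive"],
)
```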
Multipart polygons are features that have more than one polygon for each row in the attribute table. If you want to explode these into individual records at a later time, there is a tool available on the Edit tab, in the Tools group.
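If you prefer to run it as a geoprocessing call rather than through the ribbon, the tool is Multipart To Singlepart; here is a one-line sketch with placeholder paths:

```python
import arcpy

# Explode multipart polygons so that every polygon becomes its own record.
arcpy.management.MultipartToSinglepart(r"C:\GEOG487\L3\70s_VegGrp.shp",
                                       r"C:\GEOG487\L3\70s_VegGrp_single.shp")
```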
That’s it for the required portion of the Lesson 3 Step-by-Step Activity. Please consult the Lesson Checklist for instructions on what to do next.
After experimenting with online data services in Lesson 2 and raw data in Lesson 3, which do you think is easier to work with? What are the pros and cons of each one? Can you think of any scenarios in which one is preferable over the other?
Do you have a good understanding of why we completed each step in Part II? If not, compare the starting vegetation files and final outputs (XX_Species, XX_VegGrp, XX_Invasive) in terms of extent, area, gaps, spatial detail, and attributes.
Note: Try This! Activities are voluntary and are not graded, though I encourage you to complete the activity and share comments about your experience on the lesson discussion board.
Advanced Activities are designed to make you apply the skills you learned in this lesson to solve an environmental problem and/or explore additional resources related to lesson topics.
Repeat the data customization steps from Part II in the Step-by-Step Activity using the “thirties.shp” and “fifties.shp” shapefiles in your L3 folder. You may want to read the related quiz questions within the Lesson 3 Quiz before completing the activity so you know what information to look out for.
In Lesson 3, we talked about wetland conservation, management, and restoration and explored both public and private sources of GIS data for a wetland area on Lake Erie in Ohio. We also practiced customizing data for a particular site and set of research questions. In the next lesson, we will use the data we created to demonstrate how to interpret and present results from time series analyses.
Lesson 3 is worth a total of 100 points.
If you have anything you'd like to comment on, or add to the lesson materials, feel free to post your thoughts in the Lesson 3 Discussion. For example, what did you have the most trouble with in this lesson? Was there anything useful here that you'd like to try in your own workplace?
This page includes links to resources such as additional readings, websites, and videos related to the lesson concepts. Feel free to explore these on your own. If you'd like to suggest other resources for this list, please send the instructor an email.
Federal agencies such as the United States Geological Survey and the U.S. Department of Agriculture Aerial Photography Field Office distribute medium-resolution imagery products. These are typically available for download in tiles or are hosted as seamless GIS services like the ones we used in Lessons 2 and 3. State and local governments often collect and distribute their own high-resolution, time-series imagery products. These datasets can be distributed on websites similar to federal imagery products. There may be a fee to access these products, or you may need to contact the organization directly to obtain them. There are also private companies that collect high-resolution imagery which they make available for sale.
In Lesson 3, we created several custom datasets for our study area wetlands within the Ottawa National Wildlife Refuge. These data contain information about plant species, vegetation groups, and invasive species for five snapshots in time between 1939 and 2005. In Lesson 4, we will use these datasets to understand how vegetation changes in response to water level fluctuations. In particular, we are interested in how emergent vegetation changes, since this group of plants provides the highest quality habitat in the wetland. We are also interested in how invasive species spread over time. Comparing multiple datasets over many time periods can get a bit complicated. In this lesson, we will explore several tools to make it easier to identify trends over time between multiple datasets.
Lesson 4 is a continuation of the scenario from Lesson 3 - "You are part of a research team tasked with creating a restoration plan for a degraded wetland complex. You need to understand how the vegetation within the wetland has historically responded to changes in water levels. This information will enable you to predict the health of the wetland in future scenarios, including anticipated hydrological changes due to climate change. You begin by searching for publicly available sources of data for your analysis. You find that there is not a dataset that has sufficient detail about vegetation for your study area. Furthermore, you are unable to find a dataset that shows wetland vegetation at multiple points in time. Your team hires a remote sensing specialist to acquire and interpret historical imagery and digitize polygons representing vegetation over time. Your job is to figure out how to use the vegetation data and GIS software to understand the relationship between fluctuating water levels and changes in vegetation."
At the successful completion of Lesson 4, you will have:
If you have questions now or at any point during this lesson, please post them to the Lesson 4 Discussion.
This lesson is worth 100 points and is one week in length. Please refer to the Course Calendar for specific time frames and due dates. To finish this lesson, you must complete the activities listed below. You may find it useful to print this page out first so that you can follow along with the directions. Simply click the arrow to navigate through the lesson and complete the activities in the order that they are displayed.
SDG image retrieved from the United Nations [1]
Last week, we talked about why wetlands are important, threats to wetlands such as human activities and invasive species, and wetland protection and restoration programs. This week, we will discuss how wetlands function. Hydraulic conditions are very important in wetland ecosystems because they influence their physical and chemical properties. Water depth is particularly important because it influences which types of vegetation are present, their abundance, and where they grow. Certain types of vegetation provide much better habitat than others. For example, aquatic and emergent vegetation provide cover that fish need to hide from predators and raise their young. Plants can be grouped into a few main categories based on the depth of water they prefer (listed from deepest water to dry land): submersed aquatic, floating aquatic, emergent, and terrestrial (e.g., shrubs and trees). These groups should look familiar to you (Hint: look in the Veg_Group shapefiles we created last week).
Wetlands are very dynamic; the physical and chemical properties of a given wetland can vary depending on current hydraulic conditions within a watershed. Some wetlands fluctuate more than others, especially those that are hydraulically connected to larger bodies of water such as drowned river-mouth wetlands along the Great Lakes or tidal salt marshes along the nation’s coasts. In these types of wetlands, the water elevation of the wetland rises and falls in response to water elevation changes in the main body of water. Water levels can fluctuate at different time scales, such as centuries, decades, annually, seasonally, daily, and even hourly. For example, the graph below shows the water levels of Lake Erie between 1850 and the present. You can see that water levels can fluctuate by more than 3 ft in a period of one or two years.
As water levels rise and fall, the water depths at any given location within a wetland will vary based on its bathymetry. For example, in an area with gentle slopes, an increase in water elevation will be spread over a larger area, so the water depth will not increase as much as in an area with steep slopes. This constant change in water depths naturally regulates plant communities. During periods of lower water levels, species that require deep water don’t survive. At the same time, underlying soils are exposed, allowing seeds from a variety of plants to germinate and mature. The opposite is also true. During periods of high water, species that require shallow water are drowned and eliminated. When the hydraulic properties of a wetland are modified, the natural cycle of vegetation regulation and regeneration is disturbed. Without low water levels to control their growth, some species are able to thrive season after season while others are never given the opportunity to grow.
The drowned-river mouth wetlands within our study are part of the Ottawa National Wildlife Refuge, which was created in 1961 to preserve vital habitat for migratory birds. The refuge contains approximately 4,500 acres of wetlands, the majority of which have been diked to control their water levels for over 60 years. Refuge managers use the dikes to maintain a series of ponds with different water depths at different times of the year. This allows them to create habitat for a variety of species, though management techniques favor migratory birds. By mimicking the natural rise and fall of water levels within the diked wetlands, emergent vegetation, which is habitat for many species, is able to thrive. Without the dikes, much of the habitat would not exist. However, the dikes hydraulically disconnect the pools from Crane Creek and Lake Erie, so that only a subset of wetland species can utilize the habitat. For example, fish, clams, and other small organisms cannot travel over the dikes.
Only a small portion of the wetlands in the refuge are not diked (e.g., wetlands within the study site); however, they are severely degraded. These wetlands have the potential to provide critical habitat since they are still connected to Lake Erie. The frequent high water levels of Lake Erie since the 1970s have contributed to the lack of natural regeneration of emergent vegetation in the undiked wetlands. Without human intervention, it is unlikely that water levels will lower enough to re-establish the vegetation that fish use for spawning and protection of their young. Wetland managers are also struggling to control the spread of several invasive plants that threaten the native flora and fauna, including giant reed-grass (Phragmites australis), reed-canary grass (Phalaris arundinacea), narrow-leaved cattail (Typha angustifolia), purple loosestrife (Lythrum salicaria), and flowering rush (Butomus umbellatus).
GIS is a powerful tool to help wetland managers. We know that wetlands fluctuate over time in response to changes in local and regional hydrological conditions. Historical aerial photos can help us understand these changes over time. For example, they can show how vegetation in a particular wetland has responded in the past to changes in water levels. Digitizing the vegetation into a GIS database is much more useful than just looking at the images. Once the data are in a GIS format, wetland managers can easily calculate statistics, identify trends, and create models that allow them to predict the types and abundance of vegetation they can expect at different water levels. For example, they could model future vegetation changes in response to water level fluctuations caused by climate change. They can also use the data to create baseline vegetation maps to evaluate restoration efforts, such as attempts to regenerate emergent vegetation, map the spread of invasive species over time, and evaluate control methods.
Last week, we used several publicly available datasets to familiarize ourselves with our study area wetlands. We also created several new datasets related to wetland vegetation, including species, vegetation groups, and invasive species. In Lesson 4, we are going to use this data to explore a real-world example of how GIS can be used to assist wetland managers in restoration efforts. We will also explore several methods in ArcGIS to interpret and compare multiple time-series datasets.
There are two types of required readings for Lesson 4, USGS information and Esri Help Topics, along with a couple of websites that I would like you to explore. The first reading is a fact sheet that provides more information about the invasive species in our study area. The second link sends you to a Virginia Institute of Marine Science (VIMS)/Center for Coastal Resources Management (CCRM) page that outlines GIS methods that are used when producing a shoreline and tidal marsh inventory. The third is a link to the Virginia Coastal Resource Tools Portal; within that page is a link to the Virginia Comprehensive Map Viewer. The Comprehensive Map Viewer displays a specific example of the shoreline and tidal marsh inventory produced by the CCRM. Feel free to explore additional VIMS/CCRM pages; there are many additional links that provide tidal marsh inventory examples. There are also a handful of Esri help topics related to operations we will use in ArcGIS during the Step-by-Step Activity.
Find the help articles listed below on the ArcGIS Pro Resource Center [143] website.
This section provides links to download the Lesson 4 data and reference information about each dataset (metadata). Briefly review the information below so you have a general idea of the data we will use in this lesson.
Note: You should not complete this activity until you have read through all of the pages in Lesson 4. See the Lesson 4 Checklist for further information.
Create a new folder in your GEOG487 folder called "L4." Download a zip file of the Lesson 4 Data [149] and save it in your "L4" folder. Extract the zip file and view the contents. Information about all datasets used in the lesson is provided below.
In Lesson 4, we will use many of the same datasets from Lesson 3, including the custom data sets we created in Part II of the Step-by-Step Activity. I have provided clean copies of these datasets in the zip file above. Please use these during Lesson 4, just in case you made an error during Lesson 3. You may want to compare the data you created in Lesson 3 to the provided datasets and see if there are any differences.
In Part I, we will explore tools to visually explore and compare multiple datasets, such as animations and layouts with multiple map frames. In Part II, we will explore tools to statistically compare multiple datasets, including calculating percent area and creating graphs. We will use both techniques to interpret our data and explore how vegetation within our study area changes over time as it responds to changes in water levels.
Note: You should not complete this step until you have read through all of the pages in Lesson 4. See the Lesson 4 Checklist for further information.
In Part I, we will explore several tools and techniques that make it easier to visually interpret patterns in your data using ArcGIS. These can be especially helpful when you have multiple datasets to compare.
Year | Water Level (m) | High, Med, Low |
---|---|---|
1962 | 173.9 | |
1973 | 174.9 | |
2005 | 174.2 | |
Based on the Lake Erie Hydrograph, how do the water levels for 1962, 1973, and 2005 compare to the long term averages for Lake Erie? Which years had the highest and lowest water levels between 1920 and the present?
One of the challenges of looking at time-series data of the same location is that all of the datasets overlap each other. It is very difficult to see all of the datasets at the same time if you have them all on the same map, especially if they are polygon files.
In this lesson, we arranged the layers within each group chronologically. You could also arrange them in a different order, such as by their water level (low, medium, high) to visualize how the vegetation changes correlate with water level changes.
Make sure you have the correct answer before moving on to the next step.
When you preview your animation, you should see one layer turned on at a time beginning with the VegGrp_60s and ending with the VegGrp_00s.
If your data is not close to the example, go back and redo the previous step. You’ll need to clear the animation first by going to the View tab, Animation group, select Remove.
If you want to be able to view your animation outside of ArcGIS, you can export your animation to a video file. You can also make your animations more sophisticated by exploring the available animation tools and options within ArcGIS. For example, you can add looping, string multiple animations together, add time-series labels, and add graphs that update over time along with your animation. You can find more information, such as help articles, sample animations, and tips in the Esri help topics.
Animations are great for emailing to a client or adding to a presentation. However, if you want to print your maps, you need to create a layout. We are going to create a layout with multiple map frames to make it easier to compare our data over time. When working with multiple map frames that show similar information, it is easier to set the symbology, extent, and scale in one map, then make copies of the map, instead of setting up each map separately.
The final map layout should include all of the following elements:
In ArcGIS Pro, if two or more map frames reference the same map, any manipulation of the layers in that map (such as turning a layer on or off, or zooming in or out) affects all of those map frames because the layout is referencing the same Map. To avoid this, a separate Map must be referenced for each Map Frame in a Layout. Go to the Insert tab, Project group, and select New Map. Insert six new Maps into your project (each should default to a different name: Map, Map1, Map2, Map3...).
Switch back to your original Map. Switch off the Open Street Map Basemap for now, as it will increase the loading time while you are setting up your layout. Adjust your scale and extent, right-click on “Study_Site” in the Contents pane > Zoom to Layer. Turn on the 60sVegGrp layer.
Hold down the control key and highlight the "Study_Site", "OttawaNWR", "Vegetation Group" and "OpenStreetMap" layers in the Contents pane. Right-click and select Copy.
Go to Map1, right-click on the map name in the Contents pane > Paste. Turn the Study_Site, OttawaNWR, and 70sVegGrp layers on. Do the same in Map2 but turn Study_Site, OttawaNWR, and 00sVegGrp layers on.
Switch back to your original Map. Adjust your scale and extent, right-click on “Study_Site” in the Contents pane > Zoom to Layer.
Hold down the control key and highlight the "Study_Site", "OttawaNWR", "Invasive Group" and "OpenStreetMap" layers in the Contents pane. Right-click and select Copy.
Go to Map3, right-click on the Map name in the Contents pane > Paste. Turn the Study_Site, OttawaNWR, and 60s_Invasive layers on. Do the same in Map4, but turn the Study_Site, OttawaNWR, and 70s_Invasive layers on. Then, in Map5, turn the Study_Site, OttawaNWR, and 00s_Invasive layers on.
Go to Map6, right-click on the Map name in the Contents pane > Paste. Turn the Study_Site, and OttawaNWR layers on.
Adding neatlines to your map layouts helps to visually group elements together. This is helpful when your map has a lot of information. Go to the Insert tab, Graphics and Text group, and then click on Rectangle. After you place the rectangle in the layout, you can select it and right-click to format and adjust the symbology settings of the neatline.
Visually exploring your data is a good way to start interpreting your results. However, it is difficult to determine the magnitude of change just by looking at a map. Calculating statistics allows you to have actual numbers to work with, allowing you to say that “variable x increased by 12%” instead of “variable x increased.”
While calculating statistics, it is very easy to make mistakes such as typos, choosing incorrect input layers, or using incorrect order of operations. To avoid possible errors, you should first visually explore your data so you have an idea of the trends that exist in the data. After calculating statistics, you can compare your results to your visual interpretation to make sure your statistical results seem reasonable.
Study Year | Water Level (High, Med, Low) | Area Open Water (sq m) | Area Emergent Vegetation (sq m) | Area Invasive Species (sq m) | Area Controlled Invasive Species (sq m) |
---|---|---|---|---|---|
1962 | | | | | |
1973 | | | | | |
2005 | | | | | |
Which year has the most emergent vegetation? Which year has the most open water? Did you find it difficult to compare such complex numbers (lots of digits and decimal places)?
Study Year | Water Level (High, Med, Low) | % Tot. Area Open Water | % Tot. Area Emergent Vegetation | % Tot. Area Invasive | % Tot. Area Controlled Invasive |
---|---|---|---|---|---|
1962 | | | | | |
1973 | | | | | |
2005 | | | | | |
Which year has the most invasive species? Which year has the least open water? How does this correlate with water levels? Which files have the most missing data? After comparing several datasets using calculated areas and percent total areas, which technique do you find makes it easier to detect trends between multiple datasets?
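If you would like to double-check the percentages you entered above, a short script can compute them directly from a vegetation group shapefile. This is a sketch under assumed names: the path and the Veg_Group field are placeholders based on the data we created in Lesson 3.

```python
import arcpy
from collections import defaultdict

fc = r"C:\GEOG487\L4\70s_VegGrp.shp"  # placeholder path

# Sum polygon area (in the layer's map units) for each vegetation group.
area_by_group = defaultdict(float)
with arcpy.da.SearchCursor(fc, ["Veg_Group", "SHAPE@AREA"]) as cursor:
    for group, area in cursor:
        area_by_group[group] += area

total_area = sum(area_by_group.values())
for group, area in sorted(area_by_group.items()):
    print(f"{group}: {area:,.0f} sq m ({100 * area / total_area:.1f}% of total area)")
```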
After experimenting with both visual and statistical techniques to determine trends in your data, can you think of any scenarios in which one is preferable over the other?
That’s it for the required portion of the Lesson 4 Step-by-Step Activity. Please consult the Lesson Checklist for instructions on what to do next.
Advanced Activities are designed to make you apply the skills you learned in this lesson to solve an environmental problem and/or explore additional resources related to lesson topics.
Use the tools and techniques covered in the lesson and data within your L4 folder to answer questions related to the 1930s and 1950s data. You may want to read the related questions within the Lesson 4 Quiz before completing the activity so you know what information to look out for.
In Lesson 4, we explored several techniques to interpret data and compare multiple datasets over time. Lesson 4 concludes the two-part lesson in which we completed the typical required steps in a GIS workflow (acquire or create new data, understand data content and limitations, customize data for your project, design & run analysis, interpret results, present results). In Lessons 5-8, we will demonstrate how to use several tools in ArcGIS, AGO, and Spatial Analyst to address a variety of specific environmental questions.
Lesson 4 is worth a total of 100 points.
Item | Full Credit | Partial Credit | Minimal Credit | No Credit | Points |
---|---|---|---|---|---|
Map Layout | The layout is posted and includes the required elements (6 map frames and an overview map, legend, scale bar, north arrow, titles, water levels, data sources, and author). (20pts) | The layout is present but is missing one or two required elements. (15pts) | The layout is present but the map is missing several elements or is poorly designed. (10pts) | Map is missing. (0pts) | 20pts |
Reflection | Discussion is present and includes ~500 words addressing ways in which maps are effective, challenges to communicating this data, and other presentation options. (15pts) | Discussion is present but is missing a required topic. (10pts) | Discussion is present but is missing several required topics. (5pts) | Discussion is missing. (0pts) | 15pts |
Prose Quality | Is free or almost free of errors (complete sentences, student's own words, grammar, spelling, etc.). (5pts) | Has errors, but they don't represent a major distraction. (2pts) | Has errors that obscure the meaning of content or add confusion. (0pts) | | 5pts |
TOTAL | | | | | 40pts |
If you have anything you'd like to comment on, or add to the lesson materials, feel free to post your thoughts in the Lesson 4 Discussion. For example, what did you have the most trouble with in this lesson? Was there anything useful here that you'd like to try in your own workplace?
This page includes links to resources such as additional readings, websites, and videos related to the lesson concepts. Feel free to explore these on your own. If you'd like to suggest other resources for this list, please post them in the Lesson 4 Discussion.
You have been hired by the Pennsylvania Department of Environmental Protection to determine how land cover has changed historically in southeastern Pennsylvania between 1978 and 2005. You know that land cover grid data is available for the time periods of interest, but the datasets are from two different sources. You also know that while each dataset is similar, the land use/land cover categories and codes do not match up perfectly between the different historical sources. You must use Spatial Analyst to help standardize all of the datasets and determine how much land area has changed over time by land cover category (agriculture, residential, etc.).
At the successful completion of Lesson 5, you will have:
If you have questions now or at any point during this lesson, please feel free to post them to the Lesson 5 Discussion.
This lesson is worth 100 points and is one week in length. Please refer to the Course Calendar for specific time frames and due dates. To finish this lesson, you must complete the activities listed below. You may find it useful to print this page out first so that you can follow along with the directions. Simply click the arrow to navigate through the lesson and complete the activities in the order that they are displayed.
SDG image retrieved from the United Nations [1]
Land cover data represents continuous measurements from satellites such as Landsat, Sentinel, and MODIS. Raster products derived from satellite data, such as the National Land Cover Database (NLCD), are commonly used to study how much of a region is covered by forests, wetlands, impervious surfaces, agriculture, and other land and water types. In Lesson 2, we used an NLCD dataset created in 2019 and updated in 2021. NLCD is a national dataset with information on land cover for a given time period for all areas in the U.S. Each grid cell represents a particular land cover category and was derived from classification algorithms that processed Landsat satellite imagery. Currently, there are nine NLCD datasets [160] that represent 2001, 2004, 2006, 2008, 2011, 2013, 2016, 2019, and 2021 (2021 released in 2023). The NLCD is one of the most commonly used land cover datasets since it is available for such a large area and at multiple time periods.
Land cover change is a common issue that has a wide range of environmental implications. Land cover change will also be an important driver of climate change in the next century. Some reasons for this are the increase in impervious surfaces and the reduction of agricultural and forested areas, which reduce the uptake of CO2 by plants. By reviewing Figure 1, it is easy to see the effect of urban sprawl, even over just a fifteen-year period. The increased amount of red and pink cells (which represent developed area) you see in the 2016 data highlights the urban sprawl that is taking place. During urban sprawl, areas that were previously covered in forests, grasslands, wetlands, etc., become developed areas.
The reduction of agricultural and forest land due to the spread of low-density, single-use development into rural areas can increase air and water pollution. The loss of these lands affects human health, biological stability, wildlife habitat, and long-term sustainability. The disappearance of agricultural lands can also impact food security by reducing the amount of available food for the immediate area. This will most likely drive up prices, which will affect the overall economic health of that area. Forest loss will reduce the habitat for native species, which will cause them to encroach on urban areas and possibly result in population reductions or even extinction in that area.
The required readings for this lesson include a research project page, a journal article, a land cover and land use chapter, the NLCD fact sheet, a video, three USGS podcasts, and three Esri Help Articles. The land use change article highlights the importance of tracking land cover changes as they relate to various environmental issues. There are also three Esri articles related to operations we will use in ArcGIS during the Step-by-Step Activity. Although we will explain how to use these tools in the Step-by-Step text, the help topics will provide you with a good overview of what the tools will do when executed.
USGS Land Cover / Land Use Change Research [161]
Conterminous United States Land-Cover Change (1985–2016): New Insights from Annual Time Series [162] (Published: February 2022)
Fact Sheet: National Land Cover Database [163]
FOURTH NATIONAL CLIMATE ASSESSMENT Volume II: Impacts, Risks, and Adaptation in the United States:
Chapter 5, Land Cover and Land-Use Change [164]
JOHN HULT:
Hello everyone. Welcome to this episode of Eyes on Earth. We're a podcast that focuses on our ever-changing planet and on the people here at EROS and around the globe who use remote sensing to monitor and study the health of Earth.
My name is John Hult, and I will be your host for this episode.
If you are a regular listener, you have surely heard us talk about how the Landsat satellite data archive represents the longest continuously collected record of the Earth's surface in existence. You have also heard about how scientists monitor the health of the planet by looking back through that nearly 50-year record to track change. But how can data collected in 1972, by a satellite with 1972 technology, possibly align with data collected yesterday, by a satellite launched 40 years later? The answer, for the most part, is Collections. Landsat Collection 1 saw all that data calibrated to match up as closely as possible across all 7 satellite systems. The work allowed scientists to track points on the surface of the Earth more easily and gave them more confidence in their conclusions.
The Landsat team at EROS has just released Collection 2, an upgrade that improves accuracy and expands access to higher-level products like land surface temperature. Collection 2 also makes Landsat data available in a cloud-friendly format.
Here with us to talk about Collection 2 is Dr. Chris Barnes, a contractor at EROS who supports the Landsat International Cooperator Network. Dr. Barnes, thank you for joining us.
CHRIS BARNES:
Thank you very much. Great to be here.
HULT:
Also joining us is Dr. Christopher Barber, a remote sensing scientist with the USGS Land Change Monitoring, Assessment, and Projection initiative, also known as LCMAP. Dr. Barber, thank you for joining us.
CHRIS BARBER:
Not a problem. Happy to contribute.
HULT:
Dr. Barnes and Dr. Barber, you both work in remote sensing. You are both named Chris. My guess is you have probably gone to the same conference once or twice. You guys must have had your luggage messed up at the airport at least once, right?
BARBER:
We have had frequent flyer miles mixed up.
HULT:
Oh really? Who was the beneficiary of that?
BARNES:
I'm pleased to say that it was me that took a trip to South America.
HULT:
Oh, nice. Let's get into Collection 2 here. Dr. Barnes, we are going to start with you. Why don't you tell us what the word "collections" means in relation to satellite data. How does a collection help scientists study the Earth?
BARNES:
Absolutely. That's a great question. Back in 2016, USGS released the first Collection, Landsat Collection 1, which was a major shift in the management of the USGS archive. Before that, the Landsat archive was processed based on the most current calibration parameters that were available at the time, or the best known updates. The users would have to spend some time and effort trying to determine where that data came from, what system was used to process it. Not all Landsat instruments were processed using the same product generation system. So, in recognizing these challenges, the USGS worked with the Landsat user community and also with the joint agency USGS-NASA, Landsat Science Team to determine how they could provide a consistent archive of known data quality. The bonus in that is, it would allow users more time to conduct their scientific research using Landsat data.
HULT:
If I can interject here really quickly ... I think what I heard was that in the past, before collections, the newest Landsat data had the best calibration, the best accuracy and all of that, and something from 20 years ago didn't have all of the newest processing and it didn't align with the rest of the data for the most part. There were issues, I guess.
BARNES:
Yes. There was a little disconnect in what is being acquired today as to what was acquired back in the 1970s and 1980s.
HULT:
So in Collection 1, you did that. You did align all of the data as well as possible, is that right?
BARNES:
Absolutely, yes that is right. So all the data going back to 1972 from the days of Landsat 1 all the way to Landsat 8, the most current Landsat that is in orbit. All of that data, over 9 million scenes, have all been processed to the same calibration and validation parameters. So that allows users to go back and forth through the entire Landsat archive and conduct their research knowing that the most up to date parameters have been used to calibrate that data.
HULT:
Now, I want to jump over to Dr. Barber here, because you worked with Landsat data before Collections, as I understand it, in some remote parts of the world. Tell us about that work, and tell us what it was like to work with Landsat data before Collection 1.
BARBER:
Not all the Landsat data that exists was collected by what we call the United States ground station. There's a collection that are called the foreign ground stations. They were scattered in a bunch of countries around the world and each of those started with the U.S. version of software for processing Landsat data, but then they customized to their particular needs and their particular tastes for their local user community. So when we start working in Southeast Asia or South America, and you start working with data from 2, 3, 4 different foreign ground stations, they have all been processed with different versions of the software, using different algorithms coming in different physical formats. You had to change the way you worked with each piece of data depending on when and where it came from.
HULT:
So, before the consolidation, the Landsat Global Consolidation, where all that international Landsat data was moved from those ground stations into the EROS archive and later processed into Collection 1, you were relying on data from ground stations that may have been and still could be processing data differently to serve their own local needs. It almost sounds as though you were working with black and white photography versus colored photography, maybe one image that is zoomed in and the other that's a little wider. You kind of had to cobble all that together. Does that sound right?
BARBER:
That's a very broad, rough analogy, yeah. Even countries right next to each other like Thailand and Indonesia, they aren't right next to each other, but they are close. They would process data differently.
HULT:
Interesting. Well, now they would be looking at, if they were looking at Collection 1, they would be looking at the same processing, the same standards and things would match up a lot easier.
Dr. Barnes, I want to turn back to you on this. How do we do that? I mean, how do we get to a place where we can compare a satellite image from 40 years ago to one collected just yesterday? Talk to us a little bit about the steps involved in making all those pixels align.
BARNES:
Well, all those kudos go to a very intelligent team of calibration and validation engineers both part of the USGS and NASA. Where they are constantly monitoring and looking at the performance of the instruments on board the Landsat spacecraft. They then publish those findings in peer reviewed journal articles. Those then get feedback by people in the calibration/validation community. It all comes down to monitoring on a daily basis how the instrument is performing and what changes, if necessary, need to be applied. That's kind of what's happened in this collection management structure. Any changes needed or observed that need to be applied, in a new collection, they make that call of when those will be implemented. That is exactly what's happened and in part what has triggered a reprocessing event for Collection 2.
HULT:
Collection 1 sounds pretty great. Sounds like you have done all the calibration/validation, they do all this work. They really dig down to make sure they have the right changes, the right alterations, the right fixes. They apply them all the way back through the archive and it sounds like you turned a Betamax video cassette into a DVD. Now we are looking at Collection 2. How much better could it possibly be? Tell us what's new with Collection 2 and what kinds of improvements we're going to see.
BARNES:
That's a great question. One of the main improvements that users will be very excited to learn about is the substantial improvement in the absolute geolocation accuracy, which is due to the ground control reference data set. It basically pinpoints, very accurately, the Landsat scene onto the Earth's surface. That not only helps when going back through the Landsat archive, but it also improves interoperability with the European Space Agency's Sentinel 2 satellites, which are very similar.
HULT:
And if we can put a finer point on that, that seems like a pretty big deal. We are talking about a point on the Earth's surface. You are talking about each pixel. Each 30 meter by 30 meter pixel of Landsat from further back in the archive, matching up even closer to the front of the archive, because there were times where it was a little bit off even in Collection 1 as I understand it, right?
BARNES:
A little bit, yes. And improving this interoperability allows users to be able to get more frequent observations of the same place on the Earth's surface by pulling in the Sentinel 2 series of satellites alongside Landsat.
HULT:
It's not just lining up Landsat pixels but it's also bringing those pixels closer to alignment with a very similar system to get more observations. Interesting.
BARNES:
Absolutely. Some other major highlights that users will be pleased to hear about are the updated global digital elevation modeling sources that are used in the Collection 2 processing system. Also, for the first time, USGS will be producing global surface reflectance and surface temperature products going back to the early 1980s, which will also be distributed as part of Collection 2.
HULT:
And when you say distributed, just to make this clear to the people who are listening, for Collection 1 you could ask for surface reflectance and surface temperature for anywhere in the world, is that right? But now it is just going to be there. If you log into EarthExplorer and search Collection 2, you will just be able to get it, is that right?
BARNES:
That is right. Up until Collection 2 was publicly available, surface reflectance was only available to the user community on demand. The surface temperature product was only available through the U.S. Analysis Ready Data set for the United States. This time, USGS will be processing and making that available for the entire world.
HULT:
Dr. Barber, tell us a little about LCMAP. My understanding is LCMAP uses Collection 1. Tell us just broadly what LCMAP does and do you see any improvements to LCMAP with Collection 2?
BARBER:
One of the things to think about is, you know, 20 plus years ago, monitoring the land surface with Landsat data was a bit of a challenge because data was expensive, computer storage was expensive. A half a dozen, a dozen or 20 images for your study period was about all you could handle for cost and storage. Some big projects, maybe 100 images, 200 images. Today, data is free, computer storage is inexpensive. So the idea of, "well let's look at all the images for the United States and look at how land cover is changing across the United States and the land surface is changing, using all the inputs" becomes an idea that is cost effective and storage effective so, why not? Before Collections, the problem was there was inconsistent data through that historical record. So if you want to do monitoring over time, especially with any kind of automated method it's really important that you are measuring the same thing and measuring with the same measurements all the time. So for example, if you want to track temperature in your backyard, you aren't going to mix up measurements of, some days you're going to take the temperature in your backyard, some days your front yard and some days in your neighbors driveway, and mix up Fahrenheit and Celsius. You're going to put one thermometer in the backyard with one measurement system to monitor that temperature. That's what Collection 1 allows us to do with LCMAP. To look at the conterminous United States and really look at all Landsat data available and track it through time and analyze land cover change. With Collection 2 coming up, we expect to see some improvements, especially in the geolocations so that thermometer is always in the same place even more precisely, and improvements in the calibration and things like that. I think the advantages of Collection 2 are going to be much more evident in other parts of the world, outside the United States. A lot of the data over the United States has already been, even in Collection 1, was well advanced. At some point in the future there's a potential of taking LCMAP global, and at that point Collection 2 or even Collection 3 or beyond is going to be really invaluable for taking that work forward.
HULT:
So what you do with LCMAP is to look at every pixel back through time, to create these products. That's only possible because of Collection 1. And Collection 2 is going to perhaps improve the results there because of the accuracy, the thermometer issue that you brought up. But it's also going to make better data available to a broader swath of the world, potentially taking this approach from LCMAP and making it possible to do in parts of the world where it maybe wasn't before. There is something else I wanted to ask about with LCMAP. Is there some interplay between Collection 2 and the possibility of including other data sets in the algorithm you have now?
BARBER:
The European Space Agency has a program called Copernicus with a satellite called Sentinel-2, which produces some data that is similar to Landsat, and there is work that has been done on how to take Landsat data and Sentinel data and do what is called harmonizing it, to make the measurements easily comparable. So we can start to compare them directly. We are in a really rich time for satellite observations compared to 20 years ago or even 10 years ago. So over the next 2, 5, 10 years there's going to be more and more Earth observation satellites up there. So learning how to incorporate different sensors and different space platforms together is the way forward.
HULT:
Right. So you at LCMAP, you are thinking about this stuff-the idea of harmonizing these datasets and incorporating more observations into your work. I know NASA is working on a harmonized Landsat/Sentinel product as well. So broadly speaking, that is where the future is and that is something that this particular improvement in Collection 2 will make easier.
BARBER:
That's it exactly. So going from before Collections to Collection 1 to Collection 2, we are taking data from up to today, 8 different Landsats and making that data all work together so it is easy for researchers to use it all together. The moving forward is, how do we take those lessons and expand it to more satellite systems and different sensors and make that data all usable directly to researchers without having to worry about the engineering stuff in the background.
HULT:
Dr. Barnes, it sounds like Dr. Barber is pretty pleased with the direction we're heading. So congratulations there. Good work for your team. But, I want to talk about something else with Collection 2, which you briefly mentioned. The idea of land surface temperature and surface reflectance, those higher-level products being available right there. What do you think that particular advancement might mean to the world of remote sensing? What kind of research is that going to aid?
BARNES:
Absolutely. The first advantage to the user community is that preprocessing has already been taken care of by the USGS. Hopefully with these being globally available products, people are going to be able to do more extensive research. For example, surface reflectance accounts for aerosols, water vapor, and ozone in the atmosphere and therefore helps to accurately map changes in the Earth's surface. So applications from all around the world that look at the Earth's surface for change, and at impacts to how the Earth's surface is changing, will be able to use that product. For the land surface temperature product, because that will also be globally available, people will now start to incorporate that into global energy balance studies, looking at how the Earth's global energy is changing over time, looking at hydrological modeling, looking at crop monitoring and trying to get an indicator of vegetation health. It can also be used for looking at extreme heat events such as natural disasters, volcanic eruptions, and wildfires, and also how urban heat islands are propagating through time as global population rises and urban centers continue to gentrify out.
HULT:
We're looking at the possibility of, because it's available right there, being able to sort of automate, if you're the kind of person who does this research, you are going to be able to, if you want to, automate these processes to do some of these analyses without having to make extra requests for one thing, and you're also expanding it globally. We are talking about being able to see whether a particular city in India or Pakistan is hotter now than it was in 1994 and being able to quantify that much more easily. Is that maybe one example?
BARNES:
Absolutely. That would be one of the example applications of the surface temperature product. And to go one step further the fact that USGS moved to making Landsat Collection 2 available in a commercial cloud environment definitely lends its hand to those users who do want to do global-scale or even continental scale analyses using Collection 2. They will be able to bring their algorithms to the data, whereas in the past and what has historically been done, which is what Dr. Barber was referring to earlier, people had to download large volumes of data. That took a lot of time, cost a lot of money. You had to store that data and then run your algorithm on that data locally if you had that capability or transfer it to a place where you will be able to do that. So being accessible in a cloud environment really opens up a plethora of options for the user community to do all kinds of research. And we are really excited to see what this engenders.
HULT:
The idea of putting the data in the cloud and being able to work in the cloud environment, with Landsat data, doesn't it sort of level the playing field for folks who maybe otherwise wouldn't have access to the computing power it would take if they had to download all this data. Is that a fair characterization of one possibility?
BARNES:
Yes. I think that is a very good way of looking at this. USGS has processed the archive to the highest possible standard in the history of the Landsat program and taken it one step further by putting it in this cloud environment to allow what you are alluding to: a more even playing field of accessibility for retrieving that data.
HULT:
Right. Because as I understand it, some of the work that LCMAP might do and even further with some of the global monitoring things that are taking place, you need some pretty serious horsepower to do that work. Don't you Dr. Barber?
BARBER:
Indeed. You know it is getting less and less as we move forward. One of the things that is important to remember is that science operates on a budget. And some of that money goes to computational resources and computer storage and time for human resources to do analysis. Pre-Collections, a lot of that time was taken up in just preparing data for analysis. If we can get to the point with Collection 2 where we can get real measurements of surface reflectance and land surface temperature, ready to go for science, that leaves a lot more in our kind of human resources and computational budget to use for actual analysis rather than data prep.
HULT:
That's a good point and gets to something we should probably address here and make clear. The data is being made available in this cloud friendly format but it is not as though the USGS is providing cloud storage, right? Like a person would have to pay for a cloud storage plan, just as they would have had to pay for a computer and an internet connection to download the data before. The data is there and is available in that format. Is that right, Dr Barnes?
BARNES:
That is exactly right. The USGS is making the Collection 2 Landsat archive freely available. There is no change to the 2008 Open Data Policy, but users will have to work with the respective commercial cloud providers if they want to run algorithms on the archive in the cloud, and also to export the results from running those algorithms in that cloud environment.
HULT:
Just to wrap up here ... is there anything else you would like the world to know? How does it feel to have this job done?
BARNES:
Yes, I think obviously this is the second major reprocessing event USGS has done with the Landsat archive in 4 or 5 years. The major accomplishment with this is the amount of enhancements that have gone into this version of the archive. Not only just improving the quality of the Landsat archive but also new data access and distribution capabilities. And of course the new products of surface reflectance and surface temperature. Another major leap that the USGS took with this is migrating to the cloud environment. That not only means data access and distribution but also the processing of the Landsat archive in the cloud. So it really goes to show you how far the USGS have come in these last 5 years between Collection 1 and Collection 2 of really being able to turn around and produce the most high quality Landsat archive known to date.
HULT:
We've been talking to Dr. Chris Barnes and Dr. Chris Barber about Collection 2 and improvements to the Landsat archive. Doctors, thank you for joining us.
BARNES:
Thanks, John
BARBER:
Thanks, John. Exciting times ahead.
This podcast is a product of the U.S. Geological Survey, Department of Interior.
Find the help articles listed below in the ArcGIS Pro Resources Center [169].
Search for:
Landsat in Action - Land Cover and Land Cover Change with Tom Loveland [173]
Land Change Monitoring, Assessment, and Projection (LCMAP) Data [174]
LCMAP Viewer [175]
USGS How do changes in climate and land use relate to one another? [176]
Causes and Consequences of Climate Change [177] (European Commission, 2015)
Report on Climate Change and Land [178] (World Resources Institute, 2019)
Impacts of Land Use/Land Cover Change on Climate and Future Research Priorities [179] (Rezaul Mahmood, Roger Pielke, Sr., et al. - AMERICAN METEOROLOGICAL SOCIETY [180])
This section provides links to download the Lesson 5 data along with reference information about each dataset (metadata). Briefly review the information below so you have a general idea of the data we will use in this lesson. You do not need to click on any of the hyperlinks as we will do this in the Step-by-Step Activities.
In this lesson, we will experiment with two different types of data providers, both public and private. For the publicly available data, we will use a combination of online data services and raw GIS files, which you will have to download yourself. The private data is included in the zip file below. Keep in mind, the websites and servers of public data providers may occasionally experience technical difficulties. If you happen to work on this lesson while one of the sites is down, you may need to stop work and start again the following day to allow time for the servers to reboot.
Note: You should not complete this activity until you have read through all of the pages in Lesson 5. See the Lesson 5 Checklist for further information.
Create a new folder in your GEOG487 folder called "L5." Download a zip file of the Lesson 5 Data [181] and save it in your "L5" folder. Extract the zip file and view the contents.
Information about all datasets used in the lesson is provided below:
The Step-by-Step Activity for Lesson 5 is divided into two parts. In Part I, we will look at and obtain a publicly available historical land use dataset from the Pennsylvania Spatial Data Access (PASDA) website. We will also review the 1978 historical land cover data included with the lesson. In Part II, we will standardize the land cover data for analysis. Then we will determine the land cover area per category for each county using the Tabulate Area and Join Tools. Finally, we will calculate the percent change for land cover categories between 1978 and 2005.
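To give you a sense of where Part II is headed, the Tabulate Area step might look roughly like the call below when scripted. The county layer, field names, and raster names are placeholders, and the tool requires the Spatial Analyst extension.

```python
import arcpy
from arcpy.sa import TabulateArea

arcpy.CheckOutExtension("Spatial")

# Cross-tabulate the area of each reclassified land cover code within each county.
TabulateArea(
    in_zone_data=r"C:\GEOG487\L5\Counties.shp",    # zone features (placeholder)
    zone_field="COUNTY_NAME",                      # placeholder field name
    in_class_data=r"C:\GEOG487\L5\lc2005_reclass", # reclassified raster (placeholder)
    class_field="Value",
    out_table=r"C:\GEOG487\L5\area_2005.dbf",
)

arcpy.CheckInExtension("Spatial")
```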
Note: You should not complete this step until you have read through all of the pages under the Lesson 5 Module. See the Lesson 5 Overview and Checklist for further information.
In Part I, we will explore and obtain publicly available datasets from the Pennsylvania Spatial Data Access (PASDA ) website. We will also review the private data for this lesson and organize the map for analysis.
When working with raster data that you have downloaded, you need to be careful when placing it on your computer. Many raster datasets have an associated Info folder that contains critical reference information. The files contained within this folder are numerically named based on the particular order in which they were originally created. As a result, it is possible that different raster datasets have identically named reference files within this folder.
It is important to note that although these files may have the same name, they do not contain the same information. Therefore, it is possible to corrupt your data if you overwrite one set of a raster dataset’s files with another’s. You can avoid this potential problem by creating new folders for each dataset and extracting each zip file within its own folder.
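One simple way to enforce the one-folder-per-dataset rule is to script the extraction. The snippet below is plain Python (no arcpy required); the downloads folder path is a placeholder.

```python
import zipfile
from pathlib import Path

downloads = Path(r"C:\GEOG487\L5\downloads")  # placeholder folder of zip files

for zip_path in downloads.glob("*.zip"):
    # Extract each archive into its own folder, named after the zip file, so
    # identically named Info files from different rasters can never overwrite
    # one another.
    target = downloads / zip_path.stem
    target.mkdir(exist_ok=True)
    with zipfile.ZipFile(zip_path) as archive:
        archive.extractall(target)
    print(f"Extracted {zip_path.name} -> {target}")
```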
What are the largest towns within the study area? Where is the study site in relation to the overall area of Pennsylvania?
Raster attribute tables are different from vector attribute tables. Unlike with vector files, each unique value is only listed once.
Do all of the land cover raster datasets have the same number of coded values? How many unique codes does each raster dataset contain? Are any of the codes the same? Do they have the same extent and cell size? Do all of the datasets have the same spatial reference information?
We want to figure out how land use has changed between 1978 and 2005 for several counties in southeastern Pennsylvania. We are mainly interested in the urbanization of agricultural and forested areas. You may have noticed that the land cover categories and coded values are different for the 1978 and 2005 datasets. Since we are interested in comparing land use change, we will need to standardize these categories before we can compare them. We also want to remove extraneous information from our datasets to make them easier to work with. We will use the Reclassification Tool in Spatial Analyst to perform both of these tasks simultaneously.
We will reclassify both of the input raster data layers using the standardized codes below. Codes 1, 2, and 3 collapse the existing detailed categories into broader categories. The "NODATA" (ALL CAPS) category allows us to ignore all of the land cover categories that we are not using in our analysis.
Value | Category |
---|---|
1 | Developed Land |
2 | Agricultural Land |
3 | Forested Land |
NODATA | All Other Values |
The tables below show the original land cover codes from the 1978 and 2005 land cover grids (the 1978 table first, followed by the 2005 table), associated descriptions, and the new codes we will use to reclassify the data. A scripted sketch of this reclassification step follows the tables.
Original value | Original Category | NEW Reclass Value |
---|---|---|
11 | Residential | 1 |
12 | Commercial and Services | 1 |
13 | Industrial | 1 |
14 | Transportation, Communications... | 1 |
15 | Industrial and Commercial Complexes | 1 |
16 | Mixed Urban or Built-up Land | 1 |
17 | Other Urban or Built-up Land | 1 |
21 | Cropland and Pasture | 2 |
22 | Orchards, Groves, Vineyards | 2 |
23 | Confined Feeding Operations | 2 |
24 | Other Agricultural Land | 2 |
31 | Herbaceous Rangeland | NODATA |
32 | Shrub and Brush Rangeland | NODATA |
33 | Mixed Rangeland | NODATA |
41 | Deciduous Forest Land | 3 |
42 | Evergreen Forest Land | 3 |
43 | Mixed Forest Land | 3 |
51 | Streams and Canals | NODATA |
52 | Lakes | NODATA |
53 | Reservoirs | NODATA |
54 | Bays and Estuaries | NODATA |
61 | Forested Wetland | 3 |
62 | Non-forested Wetland | NODATA |
72 | Beaches | NODATA |
73 | Sandy Areas other than Beaches | NODATA |
74 | Bare Exposed Rock | NODATA |
75 | Strip Mines, Quarries, and Gravel Pits | NODATA |
76 | Transitional Areas | NODATA |
Original value | Original Category | NEW Reclass Value |
---|---|---|
14 | Roads | 1 |
21 | Row Crops | 2 |
24 | Pasture/Grass | 2 |
41 | Deciduous Forest | 3 |
42 | Evergreen Forest | 3 |
43 | Mixed Deciduous and Evergreen | 3 |
50 | Water | NODATA |
51 | Streams and Canals | NODATA |
52 | Lakes | NODATA |
61 | Forested Wetlands | 3 |
62 | Emergent Wetlands | NODATA |
70 | Bare; Unclassified Urban/Mines, Exposed Rock, Other Unvegetated Surfaces | NODATA |
111 | Residential Land; 5-30% impervious | 1 |
112 | Residential Land; 31-74% impervious | 1 |
113 | Residential Land; 74% < impervious | 1 |
121 | Institutional/Industrial/Commercial Land; 5 - 30% impervious | 1 |
122 | Institutional/Industrial/Commercial Land; 31 - 74% impervious | 1 |
123 | Institutional/Industrial/Commercial Land; 74% < impervious | 1 |
124 | Airports | 1 |
241 | Golf Courses | 1 |
750 | Active Mines/Significantly Disturbed Mined Areas | NODATA |
1111 | Residential Land; 5 - 30% impervious; Deciduous Tree Cover | 1 |
1112 | Residential Land; 5 - 30% impervious; Evergreen Tree Cover | 1 |
1113 | Residential Land; 5 - 30% impervious; Mixed Tree Cover | 1 |
1121 | Residential Land; 31 - 74% impervious; Deciduous Tree Cover | 1 |
1122 | Residential Land; 31 - 74% impervious; Evergreen Tree Cover | 1 |
1123 | Residential Land; 31 - 74% impervious; Mixed Tree Cover | 1 |
1131 | Residential Land; 74% < impervious; Deciduous Tree Cover | 1 |
1132 | Residential Land; 74% < impervious; Evergreen Tree Cover | 1 |
1133 | Residential Land; 74% < impervious; Mixed Tree Cover | 1 |
1211 | Institutional/Industrial/Commercial Land; 5 - 30% impervious; Deciduous cover | 1 |
1212 | Institutional/Industrial/Commercial Land; 5 - 30% impervious; Evergreen tree cover | 1 |
1213 | Institutional/Industrial/Commercial Land; 5 - 30% impervious; Mixed tree cover | 1 |
1221 | Institutional/Industrial/Commercial Land; 31 - 74% impervious; Deciduous Tree Cover | 1 |
1222 | Institutional/Industrial/Commercial Land; 31 - 74% impervious; Evergreen Tree Cover | 1 |
1223 | Institutional/Industrial/Commercial Land; 31 - 74% impervious; Mixed Tree Cover | 1 |
1231 | Institutional/Industrial/Commercial Land; 74% < impervious; Deciduous tree cover | 1 |
1232 | Institutional/Industrial/Commercial Land; 74% < impervious; Evergreen tree cover | 1 |
1233 | Institutional/Industrial/Commercial Land; 74% < impervious; Mixed tree cover | 1 |
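If you would rather script the reclassification than use the tool dialog, a minimal arcpy sketch based on the 1978 table above might look like the following. The input raster name, output name, and workspace path are placeholders (they are not specified here in the lesson), and only the 1978 remap is shown; the 2005 grid would be handled the same way with its own remap list.

```python
# Hedged arcpy sketch of the reclassification step (names and paths are placeholders).
# Assumes the Spatial Analyst extension is licensed.
import arcpy
from arcpy.sa import Reclassify, RemapValue

arcpy.CheckOutExtension("Spatial")
arcpy.env.workspace = r"C:\GEOG487\L5"  # hypothetical path

# Old code -> new code pairs for the 1978 grid (abbreviated; see the full table above).
# Codes not listed (rangeland, water, beaches, etc.) are sent to NODATA.
remap_1978 = RemapValue([
    [11, 1], [12, 1], [13, 1], [14, 1], [15, 1], [16, 1], [17, 1],  # Developed
    [21, 2], [22, 2], [23, 2], [24, 2],                             # Agricultural
    [41, 3], [42, 3], [43, 3], [61, 3],                             # Forested (incl. forested wetland)
])

rc_1978 = Reclassify("lu_1978", "Value", remap_1978, "NODATA")  # unlisted values become NoData
rc_1978.save("RC_lu_1978.tif")
```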
Once both time periods share common land cover codes, we can calculate how much change has occurred in each category over time using the workflow below:
It is important to remember to double-check the environment settings within the Spatial Analyst tool pane, as ArcGIS sometimes ignores the global environment settings. A general rule of thumb is to always be certain of the environment settings used in your analysis, as they are critical to your results.
Notice how the extent setting we used clipped the raster to a much smaller area, and the mask setting we used assigned values of NoData to all of the areas that are both outside our study area boundary and within the extent.
Also, notice the grey areas within our study area. These are places where we reclassified the original land cover to "NoData." Keep in mind that you could also do the opposite of what we did – you can reclassify cells with starting values of "NoData" to other values.
Make sure you have the correct answer before moving on to the next step.
The cell counts in your RC_lu_1978.tif should match the examples below. If your data does not match this, go back and redo the previous step. You can double-check the settings and rerun the tool from the Geoprocessing History.
You’ll need to right-click RC_lu_1978.tif in the Contents pane and choose Attribute Table to see the Count attribute.
How did the extent, mask, and cell size settings affect the output raster? You can view the cell size settings by right-clicking on the output raster > Properties > Source > Cell Size.
Make sure you have the correct answer before moving on to the next step.
Your LU_2005_RC grid should match the example below. If your data does not match this, go back and redo the previous step.
Since you know the cell size and number of cells with each unique value, you can easily calculate the total area within each land cover category for the entire study area. Note that you need to use the area of the cell, not the length, when making these calculations.
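For example, the arithmetic is just cell count times cell area. The cell size below is an assumption for illustration only; use whatever cell size your raster actually reports, and note that the category counts come from the 1978 example table shown later on this page.

```python
# Total area per category = cell count x cell area (cell size squared).
cell_size_m = 30          # assumed cell size in meters; check your raster's properties
cell_area_sqm = cell_size_m ** 2

counts_1978 = {"Developed": 3762517, "Agricultural": 16778194, "Forested": 11264313}

for category, count in counts_1978.items():
    area_sqkm = count * cell_area_sqm / 1_000_000
    print(f"{category}: {area_sqkm:,.1f} sq km")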
In the next step, we will use the "Tabulate Area" tool to create a table with the areas of each land cover type within each county. We will repeat this for both time periods. The "Tabulate Area" tool will automatically generate column names based on the values in the input table. Since we will have two datasets with the same land cover codes, we need to be able to keep track of each year’s corresponding table. To do this, we will add new fields to each reclassified raster attribute table and populate them with a combination of the study year and the land cover code.
Make sure you have the correct answer before moving on to the next step.
Your reclassified attribute tables should have their ID values populated as shown below. If your data does not match this, go back and redo the previous step.
OID | Value | Count | LU | ID |
---|---|---|---|---|
0 | 1 | 3762517 | DEV | 1978_Dev |
1 | 2 | 16778194 | Agr | 1978_Agr |
2 | 3 | 11264313 | For | 1978_For |
OID | Value | Count | LU | ID |
---|---|---|---|---|
0 | 1 | 6730640 | Dev | 2005_Dev |
1 | 2 | 10679480 | Agr | 2005_Agr |
2 | 3 | 14529196 | For | 2005_For |
Now that we have reclassified the land cover data with standardized categories and created unique IDs, we can begin our land use change analysis. We need to calculate the area for each of the three land cover categories within each county for each time period. To do this, we will use the "Tabulate Area" tool, which calculates cross-tabulated areas between two datasets, summarizing one dataset within zones defined by the other.
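If you wanted to script this step instead of using the tool dialog, a hedged arcpy sketch could look like the example below. The county layer name and the workspace path are placeholders; the zone field, class field, and output table names follow the names used elsewhere on this page.

```python
# Hedged arcpy sketch of the Tabulate Area step (layer names and paths are placeholders).
import arcpy
from arcpy.sa import TabulateArea

arcpy.CheckOutExtension("Spatial")
arcpy.env.workspace = r"C:\GEOG487\L5"  # hypothetical path

# Zones = county polygons (identified by FIPS code); classes = reclassified land cover IDs.
TabulateArea(
    in_zone_data="counties.shp", zone_field="FIPS_CODE",
    in_class_data="RC_lu_1978.tif", class_field="ID",
    out_table="TA_1978.dbf")

TabulateArea(
    in_zone_data="counties.shp", zone_field="FIPS_CODE",
    in_class_data="LU_2005_RC", class_field="ID",
    out_table="TA_2005.dbf")
```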
Open the "TA_1978.dbf" table in your map. Notice the names of the columns. What are the units of the tabulated areas?
Make sure you have the correct answer before moving on to the next step.
Your tabulated area tables should match the examples below. Both of the tables should have 19 records and 5 columns. If your data does not match this, go back and redo the previous step.
OID | FIPS_CODE | A_1978_Dev | A_1978_AGR | A_1978_FOR |
---|---|---|---|---|
0 | 025 | 60044775.0275 | 140076533.603 | 764621228.308 |
1 | 029 | 283115147.234 | 1221605255.57 | 451162509.312 |
2 | 041 | 108895573.345 | 864337176.226 | 448917213.30 |
3 | 043 | 142616070.306 | 607690006.007 | 604278011.426 |
4 | 071 | 187182416.169 | 1895813331.92 | 374069736.292 |
OID | FIPS_CODE | A_2005_Dev | A_2005_AGR | A_2005_FOR |
---|---|---|---|---|
0 | 025 | 85861467.5195 | 101130340.659 | 771302030.31 |
1 | 029 | 557661328.688 | 673340415.676 | 659276515.52 |
2 | 041 | 239732484.125 | 630922677.737 | 531524170.633 |
3 | 043 | 236418388.277 | 400440924.138 | 703767243.941 |
4 | 071 | 433833969.022 | 1367605682.98 | 608866798.469 |
We will use the Join function to create a "master table” that contains the information from both of the Tabulate Area tables and the attributes of the counties. Since a join only references the other tables virtually, we will export this dataset to permanently save the joined attributes.
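If you prefer scripting, one way to reach an equivalent permanent result (not identical to the lesson's join-then-export click-path) is the Join Field tool followed by a copy. The county layer name and output name below are placeholders.

```python
# Hedged arcpy sketch: append both Tabulate Area tables to the counties layer,
# then export a permanent "master" copy. Names are placeholders.
import arcpy

arcpy.env.workspace = r"C:\GEOG487\L5"  # hypothetical path

arcpy.management.JoinField("counties.shp", "FIPS_CODE", "TA_1978.dbf", "FIPS_CODE")
arcpy.management.JoinField("counties.shp", "FIPS_CODE", "TA_2005.dbf", "FIPS_CODE")

# Save the result as a new feature class so the joined attributes travel with it.
arcpy.management.CopyFeatures("counties.shp", "counties_master.shp")
```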
Make sure you have the correct answer before moving on to the next step.
Your attribute tables should match the examples below. If your data does not match this, go back and redo the previous step.
FID | county_Nam | FIPS_code | A_1978_Dev | A_1978_agr | A_1978_for | A_2005_Dev | A_2005_agr | A_2005_for |
---|---|---|---|---|---|---|---|---|
0 | Carbon | 025 | 60044775.027 | 140076533.603 | 764621228.308 | 85861467.5195 | 101130340.659 | 771302030.31 |
1 | Chester | 029 | 283115147.23 | 1221605255.57 | 451162509.312 | 557661328.68 | 673340415.676 | 659276515.52 |
2 | Cumberland | 041 | 108895573.349 | 864337176.226 | 448917213.304 | 239732484.125 | 630927677.737 | 531524170.633 |
3 | Dauphin | 043 | 142616070.306 | 607690006.007 | 604278011.426 | 236418388.277 | 400440924.138 | 703767243.941 |
4 | Lancaster | 071 | 187182416.169 | 1895813331.92 | 374069736.292 | 433833969.022 | 1367605682.98 | 608866798.469 |
5 | York | 133 | 144054770.453 | 1585057521.64 | 609945566.225 | 451855625.63 | 1025137908.13 | 837911659.598 |
6 | Philadelphia | 101 | 322253436.539 | 9576897.87968 | 4153583.7766 | 275763193.022 | 18073754.4639 | 31790615.6787 |
7 | Lebanon | 075 | 70680207.6218 | 582750540.97 | 277862437.60 | 134141054.505 | 423053914.346 | 364983291.654 |
Sometimes your calculated values will have too many digits to be stored in a long integer field. In these situations, you can use a data type of "float" instead.
As we saw in Lesson 2, it is much easier to compare numbers using percent areas vs. calculated areas. In this step, we are going to calculate the percent change within each land use type between 1978 and 2005.
(([tot land use in later time] - [tot land use in earlier time]) / [TotAreaSqm]) * 100
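As a quick arithmetic check, the same calculation in plain Python is shown below. The developed-land areas come from the Carbon County row above, but the county total area is a made-up placeholder; substitute your own TotAreaSqm value.

```python
# Percent change of a land use category relative to the county's total area.
# Mirrors: (([later] - [earlier]) / [TotAreaSqm]) * 100
def pct_change(later_sqm, earlier_sqm, total_area_sqm):
    return (later_sqm - earlier_sqm) / total_area_sqm * 100.0

# Developed land in Carbon County (1978 vs 2005); total area below is a placeholder.
print(round(pct_change(85861467.5195, 60044775.027, 1_000_000_000), 2))
```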
Make sure you have the correct answer before moving on to the next step.
Your calculated values should match the example below. If your data does not match this, go back and redo the previous step. I have only included the values for Adams County. You may need to sort your results to find this county.
Create a map layout with the 4 map frames below. (Note: You will not turn in these maps. However, you will need to consult them to complete the Lesson 5 Quiz).
In Lesson 5, we used the Reclassify Tool to collapse complex categories into simpler versions. We also used it to eliminate portions of our starting data that we did not need for our analysis using the "NoData" code. Can you think of any other ways you could use this tool?
That’s it for the required portion of the Lesson 5 Step-by-Step Activity. Please consult the Lesson Checklist for instructions on what to do next.
Try one or more of the optional activities listed below.
Advanced Activities are designed to make you apply the skills you learned in this lesson to solve an environmental problem and explore additional resources related to lesson topics.
In the Step-by-Step portion of the lesson, we were mainly concerned with three land cover categories: developed, agriculture, and forest. We are also interested in how wetlands have changed between 1978 and 2005. We are particularly interested in the land cover categories below:
We would like to figure out the following:
Note: You may want to read the related quiz questions within the Lesson 5 Quiz before completing the activity so you know what information to look out for.
In Lesson 5, we determined land use change between 1978 and 2005 using land cover datasets from two different sources. We explored how standardizing data can be useful in comparing different, yet similar, datasets by utilizing reclassification tools. Then we calculated percent differences by determining the percent area for each land cover category and combining this information into one table using simple math.
Lesson 5 is worth a total of 100 points.
If you have anything you'd like to comment on, or add to the lesson materials, feel free to post your thoughts in the Lesson 5 Discussion. For example, what did you have the most trouble with in this lesson? Was there anything useful here that you'd like to try in your own workplace?
This page includes links to resources such as additional readings, websites, and data related to the lesson concepts. Feel free to explore these on your own. If you would like to suggest other resources for this list, please send the instructor an email.
You have been hired by a local landowner to calculate the carbon sequestration potential of a small forested area in southeastern Michigan. You know it is possible to estimate carbon values using measurements of tree height and diameter. After an initial site visit to the property, you determine it will be too costly and time-consuming to measure every single tree on the property. Given these limitations, you decide to use a representative sample to estimate values for the entire forest. After setting up a sampling plan, you collect information in the field for 18 sample areas. After returning to the office, you enter your data into two CSV files, one with tree measurements and plot identification numbers and another with plot GPS coordinates. You will use ArcGIS and the Spatial Analyst extension to create a plot shapefile from your tabular data and interpolate your sample data for the entire forest.
At the successful completion of Lesson 6, you will have:
If you have questions now or at any point during this lesson, please feel free to post them to the Lesson 6 Discussion.
This lesson is worth 100 points and is one week in length. Please refer to the Course Calendar for specific time frames and due dates. To finish this lesson, you must complete the activities listed below. You may find it useful to print this page out first so that you can follow along with the directions. Simply click the arrow to navigate through the lesson and complete the activities in the order that they are displayed.
SDG image retrieved from the United Nations [1]
Calculating carbon sequestration and associated carbon credits for forests is an initiative related to climate change. Climate change, also known as global warming, is caused by increased levels of greenhouse gases trapped in Earth’s atmosphere. Some of the expected effects of global warming include melting glaciers, sea-level rise, changes in water resources, changes in food production, loss of biodiversity, increases in extreme weather, and threats to human health.
Forests play an important role in climate change due to the fact that trees naturally absorb and release carbon dioxide during their life cycle. During photosynthesis, they remove carbon dioxide from the air and store it as organic matter in their trunks, branches, foliage, roots, and soils. This process is known as carbon sequestration. When trees decay or burn, they release their stored carbon back into the atmosphere. The amount of carbon a particular tree absorbs or releases throughout its lifecycle is negligible. However, due to their global abundance, the cumulative effect is very large.
The Intergovernmental Panel on Climate Change (IPCC) [194], winner of the 2007 Nobel Peace Prize, is the United Nations body for assessing the science related to climate change and is therefore often considered the world’s leading authority on climate change. One of their recommendations is that we need to mitigate the future impacts of climate change by reducing current and future emissions. One way to reduce emissions is to reduce the area of forest that is clear-cut or degraded, since these activities account for about 15% of global greenhouse gas emissions. There are simply some existing natural environments that we cannot afford to lose due to their irrecoverable carbon reserves. Preserving existing forests is also considered a much better alternative to reforestation or afforestation, as it takes decades for a new tree to grow and absorb the amount of carbon that is released when a mature tree is lost.
There are several initiatives to encourage the preservation of existing forests on a global scale. Reducing Emissions from Deforestation and Forest Degradation (REDD+) [195] is a framework developed under the UNFCCC (United Nations Framework Convention on Climate Change) [196] that guides activities providing financial compensation to landowners for maintaining and protecting forests. To receive compensation under the REDD+ initiative, landowners need to be able to assess the amount of carbon that is stored in the trees on their property. Many of the methods to do this require a forest inventory in which tree species, height, and diameter at breast height (DBH) are measured. This information is used to estimate the volume of organic matter for each tree, which is then translated into results-based financing, whose value fluctuates depending on the current market trading value of carbon.
While it would be more accurate to measure every tree in a forest during field inventories, limitations of time and money typically make this unfeasible. This is especially true for large or hard-to-access areas (e.g., mangrove forests, swamp forests, forests with steep topography). Therefore, a common practice has been to collect a representative sample and then interpolate the values for the entire study area or use remotely sensed data. In this activity, we will concentrate our efforts on a representative sample. Forests can be inventoried by demarcating a number of sampling locations known as plots. Trees are only measured if they fall within the plot boundaries. The number and location of plots required for a given area depend on the size of the forest and the amount of variation within the study area. Large forests or forests with a lot of variation in tree cover will require more plots than small forests or forests with uniform tree cover.
GIS can be very helpful when trying to decide on the number and location of sample plots. For example, you can overlay your site boundary with current and historical aerial photos to look for variations in forest cover. You can also incorporate other datasets such as Digital Elevation Models (DEMs), hydrology, parcel boundaries, and roads. These datasets will help you identify possible hazards in the field such as fences, large rivers, steep terrain, etc. They can also help you estimate the age of the forest, depending on how old the forest is and how far back you can find aerial photos.
Most of these concepts are not unique to forest inventories. It is actually quite common in the environmental field to use representative samples to understand larger areas. For example, environmental consultants typically install monitoring wells to understand how groundwater and soil conditions vary across a site. By measuring water levels in the monitoring wells, they are able to calculate the direction and speed of groundwater flow for the whole site. They also collect groundwater and soil samples at these locations to determine if pollution levels exceed legal limits. Since the locations of the samples are known, it is possible to plot them on a map and use their location to predict values between monitoring wells.
After data is collected in the field, it is common to enter the data into an electronic format such as an Excel table, CSV file, or a simple database. As long as the field measurements are in a digital format, it is possible to view them in ArcGIS. Depending on the native file type, you may need to complete some intermediate processing steps for ArcGIS Pro to recognize them. Using the "Join" and "Display XY Data" tools in ArcGIS, it is possible to create shapefiles from tabular data. You can then use your shapefiles to identify spatial patterns in your data and create maps of your results.
The interpolation tools available within the ArcGIS Spatial Analyst Extension are particularly helpful when working with representative point data. In Lesson 6, we will practice interpolating point data to estimate values for areas that were not actually sampled. We will also explore how the options and environment settings listed below affect the output grids created by Spatial Analyst tools. We will read more about these settings in the help articles listed in the required readings section.
The required readings for Lesson 6 are listed below. Each of the short articles provides specific information about the tools and techniques we will use in Lesson 6.
Find the help articles listed below on ArcGIS Pro Resources Center [143] website.
This section provides links to download the Lesson 6 data along with reference information about each dataset (metadata). Briefly review the information below so you have a general idea of the data we will use in this lesson.
Note: You should not complete this activity until you have read through all of the pages in Lesson 6. See the Lesson 6 Checklist for further information.
Create a new folder in your GEOG487 folder called "L6." Download a zip file of the Lesson 6 Data [204] and save it in your "L6" folder. Extract the zip file and view the contents. Information about all datasets used in the lesson is provided below:
The data we will use in Lesson 6 was collected by the International Forestry Resources and Institutions (IFRI) Organization [205]. IFRI is a research network made up of 18 collaborating research centers around the globe. Since 1992, IFRI researchers have collected both ecological and social field data for over 400 sites in 15 countries. For this lesson, we are going to use a subset of their data to calculate the carbon sequestration and carbon credits for a small forest located in southeastern Michigan.
The data was collected by laying out 18 circular plots, each 10 meters in diameter, at random locations throughout the study forest. The coordinates of these locations were determined in advance using GIS. In the field, students used GPS and mobile apps to navigate to the middle of each plot, lay out the circular plot, and collect attribute information about the trees. Any trees that fell inside the plot boundaries were measured to obtain their height and diameter. Measurements were collected for a total of 278 trees.
The Step-by-Step Activity for Lesson 6 is divided into two parts. In Part I, we will create a point shapefile from our starting data tables. We will then use the field calculator to calculate the carbon sequestration for each tree and create totals by plot. In Part II, we will use the Spatial Analyst extension tools to interpolate the plot data to a raster grid covering the entire study area. We will use the results to calculate the total carbon for the study forest. During interpolation, we will experiment with several different toolbar settings to see how they affect the results.
Note: You should not complete this step until you have read through all of the pages in Lesson 6. See the Lesson 6 Checklist for further information.
In Part I, we will create the shapefile we will use to interpolate our data (a point shapefile of plots with the total carbon as an attribute). To create this, we start with the two CSV files "GPS.csv" and "Tree_Measurements.csv".
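For reference, creating the plot points from the GPS table can also be scripted. The sketch below is a hedged illustration: the X/Y column names, the output name, and the WGS84 assumption are guesses, so check the actual CSV headers and the coordinate system noted in the metadata before borrowing it.

```python
# Hedged arcpy sketch: turn the GPS coordinates CSV into a point feature class.
import arcpy

arcpy.env.workspace = r"C:\GEOG487\L6"  # hypothetical path

arcpy.management.XYTableToPoint(
    in_table="GPS.csv",
    out_feature_class="Plots.shp",
    x_field="Longitude",                              # assumed column name
    y_field="Latitude",                               # assumed column name
    coordinate_system=arcpy.SpatialReference(4326))   # assuming WGS84 GPS coordinates

# Later steps project the points to NAD 1983 UTM Zone 16N (see the projection check below).
```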
How far is the study forest from the city of Ann Arbor, MI or State College, PA? What is the surrounding land used for (commercial, agriculture, residential, etc.)?
Make sure you have the correct answer before moving on to the next step.
Check Properties > Source tab > Spatial Reference to make sure the Plot shapefile was projected correctly to NAD 1983 UTM Zone 16N. If your projection doesn’t match, remove the basemaps and check the coordinate system of the Map.
Make sure you have the correct answer before moving on to the next step.
Check the location of your plots by comparing your plot shapefile to the map below. Note: Your map will not look exactly like this by default. I changed the symbology of the points, added labels of Plot IDs, and added the Imagery layer in the background to make it easier to compare your data to the example. If you add the imagery basemap to make the comparison, be sure to remove it from your map and save before moving on to the next step.
We are going to use a somewhat general set of equations to estimate the carbon stored in each tree. For this lesson, we do not need a high level of accuracy. The important part is to demonstrate the concept of how one can calculate carbon credits using GIS. You can read more about the method we will use at: How to calculate the amount of CO2 sequestered in a tree per year [207].
There are more sophisticated methods you can use that take into account the tree species, age, climate, and other factors. The paper, “Methods for Calculating Forest Ecosystem and Harvested Carbon with Standard Estimates for Forest Types of the United States [208]” highlights an example of a more complex methodology. An example of a simpler method is highlighted in the “Landowner’s Guide to Determining Weight and Value of Standing Pine Trees [209]”.
Variable | Description | Units | Equation |
---|---|---|---|
D | Measured tree diameter (DBH) | Inches | See Tree Measurements Table (be careful with your units here). |
H | Measured tree height | Feet | See Tree Measurements Table (be careful with your units here). |
Wa | Total above-ground weight of the tree (w/o roots) | Pounds | Wa = 0.15 × D² × H |
Wt | Total weight of the tree and roots | Pounds | Wt = 1.2 × Wa |
Wd | Dry weight of the tree | Pounds | Wd = 0.725 × Wt |
Wc | Weight of carbon in the tree | Pounds | Wc = 0.5 × Wd |
Ws | Weight of carbon dioxide sequestered in the tree | Pounds | Ws = 3.6663 × Wc |
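The chain of equations above is easy to sanity-check in a few lines of plain Python (D in inches, H in feet, weights in pounds). The 10-inch, 40-foot tree in the example is hypothetical, used only to show the order of operations.

```python
# Carbon dioxide sequestered per tree, following the equation chain above.
def co2_sequestered_lbs(d_inches, h_feet):
    wa = 0.15 * d_inches**2 * h_feet   # above-ground weight (lbs)
    wt = 1.2 * wa                      # total weight including roots
    wd = 0.725 * wt                    # dry weight
    wc = 0.5 * wd                      # weight of carbon
    return 3.6663 * wc                 # weight of CO2 sequestered

# Example: a hypothetical 10-inch DBH, 40-ft tree.
print(round(co2_sequestered_lbs(10, 40), 1))
```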
Make sure you have the correct answer before moving on to the next step.
Compare your data with the summary statistics below for the "Ws" variable.
Mean | 801.0505393089 |
---|---|
Median | 337.511190219 |
Std. Dev. | 1,171.7755087661 |
Count | 278 |
Min | 0 |
Max | 6,748.03406916 |
Sum | 222,692.04992788 |
Nulls | 0 |
Skewness | 2.6994064237 |
Kurtosis | 10.3340354971 |
If your data does not match this, go back and redo your calculations. Pay special attention to unit conversions (make sure to round to 4 decimal places), the data types of the fields you used, and typos in equations.
Make sure you have the correct answer before moving on to the next step.
Compare your data with the summary statistics below for the "c_lbsqm" variable.
Mean | 157.6023000198 |
---|---|
Median | 109.55277128 |
Std. Dev. | 162.8283983546 |
Count | 18 |
Min | 6.420046445 |
Max | 646.682030534 |
Sum | 2,836.8414003566 |
Nulls | 0 |
Skewness | 1.5234771786 |
Kurtosis | 5.4280093282 |
If your data does not match this, go back and redo your calculations.
In Part II, we will use the Spatial Analyst extension tools to interpolate the carbon sequestration data we calculated for each plot to the entire forest. We will run the same interpolation tool several times to see how altering the extent, mask, and cell size settings affect the results. We will start by accepting all default settings. Then we will change the settings one at a time to see how each one affects the results.
Do some plots have more trees than others? Is there a lot of variation in the total amount of carbon or carbon per square meter value? If so, why do you think this may occur? Hint: Look at an aerial image basemap.
Do you see any spatial patterns in the data? For example, do some areas of the forest have higher values than others? If so, why do you think this may occur?
Remember from the Background Information section that the Spatial Analyst tools are governed by user-specified settings. Two of the most common errors when using Spatial Analyst tools are to either completely ignore these settings, or to set them improperly. Let’s try to interpolate our data using all of the defaults and see what our results look like.
Make sure you double-check ALL environment settings before running ANY tools in Spatial Analyst! The program often resets your cell size, extent, and mask to program or data layer defaults.
Click the Show Help >> button to help define particular input parameters.
You can review the specific input and environment settings you used in the Analysis tab, Geoprocessing group > History. This can be helpful if you are not sure if you made a mistake somewhere along the way during a complex workflow.
Make sure you have the correct answer before moving on to the next step.
If your map does not match the example below, go back and redo the previous step.
What is the default setting for analysis extent?
What is the cell size of the "default" raster we created? Why?
Raster Attribute Tables
You may notice that the option to open the attribute table of the "default" raster is grayed out. ArcGIS Pro only builds raster attribute tables if certain conditions are met. One of the conditions is that the values in the raster have to be integers. Since the values in our raster have decimals, it is not possible to view the attribute table.
Now that we’ve explored the default settings, let’s see what happens if we alter just the extent settings. Unlike vector files, rasters will always have the shape of a perfect rectangle. The size and location of the rectangle is defined by its extent.
Make sure you have the correct answer before moving on to the next step.
If your map does not match the example below, go back and redo the previous step.
How do the extents of the "parcel_extent," "forest_extent" and "default" rasters compare?
Now, let’s see what happens if we alter the mask and extent settings. Even though all rasters are defined as perfect rectangles, you can still represent your data as a sinuous shape. The computer creates this illusion by assigning cells outside the sinuous shape values of "NoData." There is not a direct equivalent to this concept in vector files.
What would the output raster look like in the following scenario?
In Step 3, we learned that the default cell size depends on the input data. If you are using one or more rasters as inputs, the cell size will default to the coarsest raster resolution. If you are using a vector file, the cell size will be calculated from the file's extent (the shorter of the extent's width or height divided by 250). The default for rasters seems appropriate since GIS best practices dictate that you should always go with the cell size of your coarsest input dataset. However, the default for vector files is quite arbitrary.
How do we choose a more meaningful cell size for our analysis? One rule of thumb is that you don’t want to "create" higher resolution data than what exists in your measured values. We know that the tree data was collected by measuring trees that fell within 10 m diameter circular plots. A cell size of 1 cm would not be appropriate, because we do not know how the data varies at that scale. A cell size of 1,000 m would be too large, since it is larger than our study area. For this project, we will use a cell size of 1 m, since our carbon values are in pounds per square meter.
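If you were scripting this part of the workflow, the extent, mask, and cell size would be set through arcpy.env before running the interpolation. The sketch below is only illustrative: the boundary layer name is a placeholder, and IDW is used here as one example method rather than the specific tool the lesson's click-path prescribes.

```python
# Hedged sketch: explicitly set Spatial Analyst environment settings before interpolating.
import arcpy
from arcpy.sa import Idw

arcpy.CheckOutExtension("Spatial")

arcpy.env.extent = "forest_boundary.shp"   # analysis extent (placeholder layer name)
arcpy.env.mask = "forest_boundary.shp"     # cells outside the mask become NoData
arcpy.env.cellSize = 1                     # 1 m, matching the lbs-per-square-meter values

carbon_surface = Idw("Plots.shp", "c_lbsqm")   # interpolate the per-plot carbon values
carbon_surface.save("carbon1m.tif")
```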
Now that we have an understanding of how the spatial analyst environment settings function, we can return to our original question. We want to figure out how much carbon the study forest sequesters. To accomplish this, we will use the "Zonal Statistics" tool in Spatial Analyst. This tool allows us to calculate statistics of the cell values of one raster (e.g., carbon1m) within zones specified by another file (e.g., forest boundary). We will use it to sum the carbon values in each cell to create a total for the entire forest.
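A hedged scripted equivalent of that summing step is shown below; the forest boundary layer name and zone field are placeholders.

```python
# Hedged arcpy sketch: sum the interpolated carbon raster within the forest boundary.
import arcpy
from arcpy.sa import ZonalStatisticsAsTable

arcpy.CheckOutExtension("Spatial")

ZonalStatisticsAsTable(
    in_zone_data="forest_boundary.shp",   # placeholder name for the forest polygon
    zone_field="FID",
    in_value_raster="carbon1m.tif",
    out_table="forest_carbon_sum.dbf",
    ignore_nodata="DATA",
    statistics_type="SUM")
```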
ArcGIS may not show all the digits in a table by default. If your numbers do not match the numbers in the quiz, expand the columns in your table to display all the digits.
The monetary value of each carbon credit fluctuates based on current market conditions. Check out more information about California’s cap-and-trade system at the Center for Climate and Energy Solutions (C2ES) [211].
One of the main takeaway points from this lesson is that Spatial Analyst is a modeling tool. Models don’t give exact final answers; rather, they give you estimates of reasonable answers based on a set of assumptions.
Environment settings allow you to easily alter the underlying assumptions of your model (cell size, mask, extent) and then quickly recalculate your results.
Selecting environment settings in Spatial Analyst tools can be confusing and seem somewhat arbitrary. If you don’t know which environment settings you should use for a particular scenario, you can try experimenting with a variety of options. This type of sensitivity analysis will help you understand how changing model assumptions affects your final results.
That’s it for the required portion of the Lesson 6 Step-by-Step Activity. Please consult the Lesson Checklist for instructions on what to do next.
Try one or more of the activities listed below:
Note: Try This! Activities are voluntary and are not graded, though I encourage you to complete the activity and share comments about your experience on the lesson discussion board.
Advanced Activities are designed to make you apply the skills you learned in this lesson to solve an environmental problem and/or explore additional resources related to lesson topics.
After looking at current aerial photos of the site, you realize that the density of tree cover varies across the study area. In fact, there are many parts of the study area that are not covered with trees. Historical aerial photos reveal that the forest is quite young and that many parts of the study area were used for agriculture until quite recently. Using aerial photos from the 1940s to the present, you create a shapefile showing areas of similar age and forest type (Vegetation07.shp in your L6 folder).
Given this new information, you realize that your original estimate of carbon sequestered by the forest is too large since many of the interpolated cells are located within areas that are not covered with trees. You decide to redo your interpolation to remove areas that are not forested from your results. What is your new estimate of carbon sequestered by the study forest?
In Lesson 6, we talked about climate change, forests, and carbon credits. We explored how to use GIS to plot and interpolate representative sample data measured in the field. We also explored how altering the extent, mask, and cell size settings within Spatial Analyst can lead to very different results. We learned that you need to select settings appropriate for your analysis since accepting the defaults can have unintended consequences. In the next lesson, we will use several more Spatial Analyst tools, including Reclassify, Region Group, and Zonal Geometry, to investigate how logging roads fragment tropical rainforests.
Lesson 6 is worth a total of 100 points.
If you have anything you'd like to comment on, or add to the lesson materials, feel free to post your thoughts in the Lesson 6 Discussion. For example, what did you have the most trouble with in this lesson? Was there anything useful here that you'd like to try in your own workplace?
This page includes links to resources such as additional readings, websites, and videos related to the lesson concepts. Feel free to explore these on your own. If you would like to suggest other resources for this list please send the instructor an email.
You have been hired by a conservation group to determine how selective logging practices have changed a rainforested area in the Congo Basin of Central Africa. You must use ArcGIS and Spatial Analyst to determine the number of forest fragments that have been created by logging roads. You also need to characterize the habitat quality of each forest fragment in terms of the ratio of interior to edge habitat, the edge to area ratio, the thickness, and the overall area.
At the successful completion of Lesson 7, you will have:
If you have questions now or at any point during this lesson, please feel free to post them to the Lesson 7 Discussion.
This lesson is one week in length and is worth a total of 100 points. Please refer to the Course Calendar for specific time frames and due dates. To finish this lesson, you must complete the activities listed below. You may find it useful to print this page out first so that you can follow along with the directions. Simply click the arrow to navigate through the lesson and complete the activities in the order that they are displayed.
SDG image retrieved from the United Nations [1]
Tropical rainforests are extremely valuable in terms of the ecological services they provide, such as biodiversity and carbon sequestration. Although they cover only 6% of the earth's surface, they provide habitat for over half of the plants and animals in the world. Many of these plants and animals are threatened or endangered species. Rainforests are also highly prized for their commercial hardwood trees. The trees are cut down and processed to create products such as teak and mahogany furniture, plywood, and flooring.
Historically, there have been two main types of logging practices used in tropical forests: clearcutting and selective logging. During clearcutting, loggers remove all of the trees in a given area, leaving large clearings in their wake. These clearings reduce the amount of usable habitat for plants and animals. You can view an example of clearcutting in Brazil by looking in Google Maps [226]. Notice the large grey patches in the images. These are areas of the forests that were cleared of all vegetation. Without vegetation to stabilize the soil, wind and rain quickly erode the nutrient-rich soils required for new species to colonize the area. Clearcutting is a very environmentally destructive process. While on Google Maps, be sure to zoom out a little, and you will see the fishbone pattern that is characteristic of logging in tropical forests. Also, check out the article "Roads could help protect the environment rather than destroy it, argues Nature Paper [227]."
In contrast, only species of value are extracted from the forest during selective logging. It seems like this process would be much more environmentally friendly, considering that much of the logged area remains forested. However, it is also a destructive process because it opens up previously inaccessible areas to human exploitation, damage, and degradation. Once logging roads are built, they tend to be used for many other activities. For example, bushmeat hunters use new roads to extract and transport illegal forest products such as monkeys, gorillas, and chimpanzees. Migrating people often travel along these roads and establish new villages. Once settled, they tend to clear surrounding areas of the forest for agriculture using slash-and-burn techniques.
As loggers build new roads, they break up large tracts of forest into progressively smaller areas or “patches.” This process is known as "forest fragmentation." Scientists use a quantity called the "edge to area ratio" to characterize forest fragments. The measurement, calculated as the perimeter of forest/area of forest, represents the complexity of the shape of each forest patch. The higher the value, the more irregular the forest boundary.
In addition to breaking up forests into smaller patches, road building activities also increase disturbances known as "edge effects." Some examples of edge effects include changes in species composition, diversity, and seed dispersion, increased tree mortality and susceptibility to fires, microclimate shifts (humidity and sunlight), increased carbon emissions, and impeded movement of animals. Scientists have observed edge effects up to 2 km from road edges.
Logging activities can have a significant impact on the local ecosystem since the smaller forest patches do not provide the same quantity and quality of habitat as large tracts. As new roads are built, fragments of forests are further degraded as the ratio of interior habitat to edge habitat decreases. Native animal species of tropical rainforests can require blocks of interior habitat greater than 1,000 sq km. Large mammals and species under hunting pressure can require interior areas of at least 10,000 sq km. To create large areas of interior habitat, care must be taken to limit road building activities to certain areas.
In Lesson 7, we are going to examine the effects of historical commercial logging activities for a forested area in southeastern Cameroon. The study area is part of the Congo Basin, which contains the world's second-largest concentration of tropical rainforests. The study area boundary encompasses two main types of land management areas: protected areas and logging areas. The Lobeke National Park is the main protected area within the study site. Using GIS datasets showing road centerlines, we will use ArcGIS Spatial Analyst to quantify the fragmentation and edge effects within the study area.
There are three types of required readings for Lesson 7: a short video describing issues facing forests and the conservation group that created the data we will use in the lesson - the World Resources Institute (WRI); background information about the study area; and Esri Help Topics related to the GIS tools we will use in the lesson.
Note: If you do not see the embedded video below, try clicking the refresh button on your browser, try viewing this page in another browser, or click on the hyperlink listed as the source.
Find the help articles listed below in the ArcGIS Pro Resources Center [143].
Search for:
This section provides links to download the Lesson 7 data along with reference information about each dataset (metadata). Briefly review the information below so you have a general idea of the data we will use in this lesson.
Note: You should not complete this activity until you have read through all of the pages in Lesson 7. See the Lesson 7 Checklist for further information.
Create a new folder in your GEOG487 folder called "Lesson7." Download a zip file of the Lesson 7 Data [238] and save it in your "Lesson7" folder. Extract the zip file and view the contents. Information about all datasets used in the lesson is provided below:
The data for this lesson is contained in a geodatabase called Lesson7.gdb. Read about geodatabases in the online help provided by Esri if you are not familiar with this data format.
The roads and study boundary were sourced from portions of the Interactive Forestry Atlas [239], Global Forest Watch Open Data Portal [240], Humanitarian Data Exchange [241], and OpenStreetMap [242] for our study area. The original data was created by the World Resources Institute - Global Forest Watch [243] (GFW), a nongovernmental organization (NGO) focused on environmental issues. They regularly produce reports and data about the state of forests and logging in Central Africa and other locations around the world. Part of their work involves the creation of GIS datasets to assist forest managers.
In Part I, we will review the historical data and organize the map for analysis. In Part II, we will use the roads dataset to create rasters of habitat quality and forest patches. In Part III, we will generate statistics about the size, shape, and habitat quality of each forest patch. We will also generate statistics of habitat quality by land management type (conservation vs. logging areas). In Part IV, we will share our analysis results using ArcGIS Online.
Note: You should not complete this step until you have read through all of the pages under the Lesson 7 Module. See the Lesson 7 Checklist for further information.
In Part I, we will review the data and organize the map for analysis.
Since all of the datasets used in this lesson have the same projection, we do not have to be concerned with the order in which we load the data.
How many of the management units are used for logging? What about conservation?
Using the "Imagery Hybrid" layer, can you see the approximate extent of the rainforests located in the Congo Basin? What kind of details can you see in the forest if you zoom in very close?
In Part II, we will use the roads dataset to create raster data layers of habitat quality and forest patches. In Part III, we will generate statistics about the size, shape, and habitat quality of each forest patch.
We will use the following coded values:
When you convert a feature layer to a raster, you have to choose a field in the feature layer on which to base the grid cell values. You often need to create a new dummy field and assign a value that is consistent for all of the records you want to convert (like we did above).
It is also important to note that if there are any selected records in the vector layer, only those records will be converted to a raster layer. Therefore, be sure to clear any selected features before performing the conversion.
The data type of the field you choose is very important. For example, if you choose a numerical field that contains decimal values, the resultant grid will not have an attribute table. However, if you choose an integer field, the resultant raster will have an attribute table. If you choose a text field, ArcGIS will automatically assign each unique text value an integer code in a new field named "VALUE."
The new raster layer will be created based on all defined Spatial Analyst environment settings. Always check these settings before converting features to a raster to avoid potentially undesirable results.
It is important to note that although the extent setting is utilized by Feature to Raster, the mask setting is ignored. Although you will not notice this with the "RoadsGrid.tif" layer, you will see the effects of this when you create a buffered grid later in this lesson.
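A hedged arcpy sketch of the feature-to-raster conversion described above is shown below. The dummy field name, the 100 m cell size, the boundary layer name, and the geodatabase path are assumptions for illustration; check the actual names in your Lesson7.gdb.

```python
# Hedged arcpy sketch of the feature-to-raster conversion (names are placeholders).
import arcpy

arcpy.env.workspace = r"C:\GEOG487\Lesson7\Lesson7.gdb"  # hypothetical path
arcpy.env.extent = "study_boundary"   # the extent IS honored by Feature To Raster
arcpy.env.mask = "study_boundary"     # but remember: the mask setting is ignored here

# (In the map, clear any selected road features first -- only selected records are converted.)
arcpy.conversion.FeatureToRaster("roads", "dummy", "RoadsGrid.tif", cell_size=100)
```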
Make sure you have the correct answer before moving on to the next step.
The "RoadsGrid.tif" raster should have the following information. If your data does not match this, go back and redo the previous step.
Remember from the Background Information section that edge effects can occur up to 2 km from roads. We will consider all areas 2 km from roads as "edge habitat" and areas farther than 2 km from roads as "interior habitat." To do this, we need to create a buffer of the road centerlines.
Make sure you have the correct answer before moving on to the next step.
The "EdgeGrid" raster should have the following information. If your data does not match this, go back and redo the previous step.
Did the Reclassify Tool honor the mask and extent settings?
Hint: Compare the InteriorGrid.tif and EdgeGrid.tif rasters along the study area boundary.
Make sure you have the correct answer before moving on to the next step.
The "InteriorGrid.tif" grid should have the following information. If your data does not match this, go back and redo the previous step.
In steps 1, 2, and 3, we created three individual grids, one for each level of habitat quality. To continue the analysis, we need a way to merge all of the data sets into one grid. The Mosaic to New Raster tool in Toolboxes will allow you to mosaic multiple raster data layers together by stacking them on top of one another. The values in the output raster are determined by the order in which the files are specified during the mosaic. Cells will first be assigned according to the cell values in the first input raster; any remaining null values will then be filled in from the next input raster, and so on. We want the roads to be on top of the stack, the edge habitat in the middle, and the forests on the bottom.
This tool does not honor the Output extent environment settings. If you want a specific extent for your output raster, consider using the Clip tool. You can either clip the input rasters prior to using this tool, or clip the output of this tool.
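Scripted, the stacking order is simply the order of the input list. The sketch below is a hedged illustration; the output location, pixel type, and file names are assumptions and should be matched to your own data.

```python
# Hedged arcpy sketch: mosaic the three habitat grids, roads on top.
import arcpy

arcpy.env.workspace = r"C:\GEOG487\Lesson7"  # hypothetical path

arcpy.management.MosaicToNewRaster(
    input_rasters=["RoadsGrid.tif", "EdgeGrid.tif", "InteriorGrid.tif"],
    output_location=arcpy.env.workspace,
    raster_dataset_name_with_extension="HabMosaic.tif",
    pixel_type="8_BIT_UNSIGNED",   # assumed; match your input grids
    number_of_bands=1,
    mosaic_method="FIRST")         # earlier rasters in the list win where they overlap
```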
Make sure you have the correct answer before moving on to the next step.
The "HabMosaic" raster should have the following information. If your data does not match this, go back and redo the previous step.
What value was assigned to areas with roads, since they have data in both the "RoadsGrid" and "EdgeGrid" rasters?
Which habitat type (roads, edge, or interior) covers the majority of the study area?
How can you calculate the area of each habitat type?
The Raster Calculator honors all raster environment settings, so it is highly useful when working with raster data. As shown above, simply selecting a raster layer and running the Raster Calculator will generate a new raster layer based on the current environment settings. Try changing these settings to see how the output differs when running the Raster Calculator on a particular raster layer.
Make sure you have the correct answer before moving on to the next step.
The "HabitatGrid" raster should have the following information. If your data does not match this, go back and redo the previous step.
We now have one grid with values showing the range of habitat quality within the study area. The next step is to create a grid of forested areas, which we need to create the forest fragments. We will use the "RoadsGrid.tif" raster we created in Part II Step 1 to create a new grid representing forested areas (cells that are NOT roads).
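One way to express "cells that are NOT roads" in a script is with the IsNull and Con raster functions. This is a sketch only; the lesson's click-path may use a different tool, and the boundary layer name is a placeholder.

```python
# Hedged sketch: make a constant-value forest grid wherever RoadsGrid.tif is NoData.
import arcpy
from arcpy.sa import Con, IsNull, Raster

arcpy.CheckOutExtension("Spatial")
arcpy.env.mask = "study_boundary"   # placeholder; keeps cells outside the boundary as NoData

roads = Raster("RoadsGrid.tif")
forest = Con(IsNull(roads), 1)      # 1 where there is no road, NoData elsewhere
forest.save("ForestGrid.tif")
```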
Make sure you have the correct answer before moving on to the next step.
The "ForestGrid.tif" raster should have the following information. If your data does not match this, go back and redo the previous step. You may need to adjust for the Mask and Processing Extent here as well.
Make sure you have the correct answer before moving on to the next step.
The "ForestPatches.tif" grid should have the following information. If your data does not match this, go back and redo the previous step.
OID | Value | Count | link |
---|---|---|---|
0 | 1 | 64201 | 1 |
1 | 2 | 58 | 1 |
2 | 3 | 122867 | 1 |
3 | 4 | 19 | 1 |
4 | 5 | 30 | 1 |
Why did we use the number "100" to calculate the area?
Make sure you have the correct answer before moving on to the next step.
The "ForestPatches.tif" grid should have the following information. If your data does not match this, go back and redo the previous step.
oid | value | count | forestid | area_sq |
---|---|---|---|---|
0 | 1 | 64201 | 1 | 642010000 |
1 | 2 | 58 | 2 | 580000 |
2 | 3 | 122867 | 3 | 1228670000 |
3 | 4 | 19 | 4 | 190000 |
4 | 5 | 30 | 5 | 300000 |
5 | 6 | 1 | 6 | 10000 |
6 | 7 | 13 | 7 | 130000 |
7 | 8 | 1 | 8 | 10000 |
8 | 9 | 318427 | 9 | 3184270000 |
How many individual forest patches are there? Which forest patch is the largest? Which forest patch is the smallest? Why do you think there are so many patches with an area of exactly 10,000 sq m?
In Part III, we will use two Spatial Analyst tools to bring together the raster layers we created in Part I (habitat quality) and Part II (forest patches). Zonal Geometry calculates several geometry measures, such as area and thickness, for zones in a raster. We will use it to generate a table of statistics about the size and shape of each forest patch. We will also use the Zonal Histogram Tool to tabulate the number of cells of each habitat type within each forest patch and management unit.
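A hedged arcpy sketch of the two zonal summaries described above is shown below. The workspace path and the output table names are placeholders chosen to match the example tables on this page.

```python
# Hedged arcpy sketch of the Zonal Geometry and Zonal Histogram steps.
import arcpy
from arcpy.sa import ZonalGeometryAsTable, ZonalHistogram

arcpy.CheckOutExtension("Spatial")
arcpy.env.workspace = r"C:\GEOG487\Lesson7"  # hypothetical path

# Size/shape statistics (area, perimeter, thickness, ...) for each forest patch.
ZonalGeometryAsTable("ForestPatches.tif", "Value", "PatchGeometry.dbf")

# Cell counts of each habitat type within each forest patch.
ZonalHistogram("ForestPatches.tif", "Value", "HabitatGrid.tif", "Habitat_by_Patch.dbf")
```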
Make sure you have the correct answer before moving on to the next step.
The "PatchGeometry" table should have the following information. If your data does not match this, go back and redo the previous step.
OID | Value | Area | Perimeter | Thickness | Xcentroid | ycentroid | Majoraxis | minoraxis | orientation |
---|---|---|---|---|---|---|---|---|---|
0 | 1 | 642010000 | 350600 | 6343.9 | 1688100 | 359206 | 24878.6 | 8214.21 | 81.9311 |
1 | 2 | 580000 | 3800 | 212.1 | 1696040 | 379519 | 631.205 | 292.488 | 84.2196 |
2 | 3 | 1228670000 | 907000 | 6250.3 | 1756350 | 335775 | 36173.2 | 10811.8 | 134.675 |
3 | 4 | 190000 | 2400 | 150 | 1698610 | 378516 | 270.871 | 223.275 | 140.531 |
4 | 5 | 300000 | 2800 | 170.7 | 1699130 | 378353 | 401.382 | 237.911 | 110.363 |
5 | 6 | 10000 | 400 | 50 | 1699560 | 378016 | 56.419 | 56.419 | 90 |
6 | 7 | 130000 | 1600 | 150 | 1700800 | 377131 | 219.193 | 188.785 | 166.224 |
7 | 8 | 10000 | 400 | 50 | 1698360 | 377216 | 56.419 | 56.419 | 90 |
Which field in the "PatchGeometry" table is the equivalent to the "ForestID" field? What are the units of the fields "AREA," "PERIMETER," and "THICKNESS"? What do the values in the fields "XCENTROID," "YCENTROID," "MAJORAXIS," "MINORAXIS", and "ORIENTATION" mean?
The Zonal Histogram tool will create a summary table that contains one row for each unique value in the "Value raster" and one column for each unique value in the "Zone dataset." The tool will calculate the total number of cells for each combination of a unique row and column. The tool can also create a graph based on the output table, which we are going to skip.
Make sure you have the correct answer before moving on to the next step.
The "Habitat_by_Patch" table should have the following information. If your data does not match this, go back and redo the previous step.
oid | Label | Value_2 | Value_3 | FORESTID | EDge_sqm | int_sqM | PCTtotedge | pcttotint |
---|---|---|---|---|---|---|---|---|
0 | 1 | 22207 | 41994 | 1 | 222070000 | 419940000 | 35 | 65 |
1 | 2 | 58 | 0 | 2 | 580000 | 0 | 100 | 0 |
2 | 3 | 74095 | 48772 | 3 | 740950000 | 487720000 | 60 | 40 |
3 | 4 | 19 | 0 | 4 | 190000 | 0 | 100 | 0 |
4 | 5 | 30 | 0 | 5 | 300000 | 0 | 100 | 0 |
What do numbers in the "LABEL" field of the "Habitat_by_MU" mean? Which management unit "use" has the most roads?
Make sure you have the correct answer before moving on to the next step.
The "Habitat_by_MU" table should have the following values. If your data does not match this, go back and redo the previous step.
OID | Label | logging | coservation | habitat | logSqm | conssqm | pcttotlog | pcttotcons |
---|---|---|---|---|---|---|---|---|
0 | 1 | 35322 | 1428 | Low Quality Habitat | 353220000 | 14280000 | 96 | 4 |
1 | 2 | 635978 | 44954 | Medium Quality Habitat | 6359780000 | 449540000 | 93 | 7 |
2 | 3 | 302425 | 167611 | High Quality Habitat | 3024250000 | 1676110000 | 64 | 36 |
Make sure you have the correct answer before moving on to the next step.
The "forestpatchpoly" shapefile should have the following information. If your data does not match this, go back and redo the previous step. Note that this table has been sorted based on "gridcode".
Fid | Shape* | Id | gridcode | ForestID |
---|---|---|---|---|
36 | Polygon | 37 | 1 | 1 |
0 | Polygon | 1 | 2 | 2 |
61 | Polygon | 62 | 3 | 3 |
1 | Polygon | 2 | 4 | 4 |
2 | Polygon | 3 | 5 | 5 |
Why is there such a large range of values for the edge to area ratio results?
How would the results of the analysis change if we used a larger or smaller cell size?
Make sure you have the correct answer before moving on to the next step.
The "Final_Forest_Patches" attribute table should have the following information. If your data does not match this, go back and redo the previous step.
FID | Shape* | Forest ID | Totareasqm | perimeterm | thichnessm | edge_sqm | int_sqm | pcttotedge | pcttotint | edgetoarea |
---|---|---|---|---|---|---|---|---|---|---|
36 | Polygon | 1 | 642010000 | 350600 | 6343.9 | 222070000 | 419940000 | 35 | 65 | 0.05461 |
0 | Polygon | 2 | 580000 | 3800 | 212.1 | 580000 | 0 | 100 | 0 | 0.655172 |
61 | Polygon | 3 | 1228670000 | 90700 | 6250.3 | 740950000 | 487720000 | 60 | 40 | 0.07382 |
1 | Polygon | 4 | 190000 | 2400 | 150 | 190000 | 0 | 100 | 0 | 1.26316 |
2 | Polygon | 5 | 300000 | 2800 | 170.7 | 300000 | 0 | 100 | 0 | 0.933333 |
3 | Polygon | 6 | 10000 | 400 | 50 | 10000 | 0 | 100 | 0 | 4 |
5 | Polygon | 7 | 130000 | 1600 | 150 | 130000 | 0 | 100 | 0 | 1.23077 |
Notice how the default outputs from many of the Spatial Analyst tools are not very easy to understand. It’s worth the time to create more intuitive fields, units, and names while you are doing the analysis. That way you can easily interpret your results later on and share them with others in a meaningful format.
In Part IV, we will finalize our map in ArcGIS, then you will be asked to share your results with the Geog487 AGO group as web maps. As a final step, you will combine the output from the Step-by-Step and Advanced Activity into a web application.
That’s it for the required portion of the Lesson 7 Step-by-Step Activity. Please consult the Lesson Checklist for instructions on what to do next.
Try one or more of the optional activities listed below.
The road data we used in this lesson were digitized from Landsat satellite images. You can read more about Landsat data on NASA’s website [246]. Since October 2008, Landsat data has been freely available to the public. It can be viewed and downloaded from the USGS Earth Explorer Viewer [36].
Note: Try This! Activities are voluntary and are not graded, though I encourage you to complete the activity and share comments about your experience on the lesson discussion board.
Advanced Activities are designed to make you apply the skills you learned in this lesson to solve an environmental problem and/or explore additional resources related to lesson topics.
In the Step-by-Step activity, we used road center-lines from the year 2007 to explore forest fragmentation and edge effects. Using the road centerlines from 2001 (roads01), how many forest patches were located in the study site in 2001? Using the road centerlines from 2021 (roads21), how many forest patches were located in the study site in 2021?
In Lesson 7, we used the Buffer, Reclassify, Region Group, Zonal Geometry, and Zonal Histogram tools to explore how logging roads have degraded tropical rainforests in southeastern Cameroon. Specifically, we determined how many forest patches were created, the area and shape of each forest patch, their edge/area ratio, and the area of edge and interior habitat. We also summarized the habitat type by management unit to see whether conservation areas or logging areas provide the best habitat.
Lesson 7 is worth a total of 100 points.
Peer Review (optional): Explore other students' submissions and add a short comment on their discussion posts.
2007 Forest Patches Map | Web map is posted to ArcGIS Online and link is made available in Canvas. Map is sufficiently designed and described. (20pts) | Map is linked, but it is missing an element or two (layers, descriptions, symbology, etc.) (15pts) | Map is linked but is missing several elements (map, layers, description) or is poorly designed. (10pts) | Link is missing. (0pts) | 20pts |
---|---|---|---|---|---|
2001 Forest Patches Map | Web map is posted to ArcGIS Online and link is made available in Canvas. Map is sufficiently designed and described. (20pts) | Map is linked, but it is missing an element or two (layers, descriptions, symbology, etc.) (15pts) | Map is linked but is missing several elements (map, layers, description) or is poorly designed. (10pts) | Link is missing. (0pts) | 20pts |
2021 Forest Patches Map | Web map is posted to ArcGIS Online and link is made available in Canvas. Map is sufficiently designed and described. (20pts) | Map is linked, but it is missing an element or two (layers, descriptions, symbology, etc.) (15pts) | Map is linked but is missing several elements (map, layers, description) or is poorly designed. (10pts) | Link is missing. (0pts) | 20pts |
Web Map Application | Web app link is posted and made available in Canvas. App facilitates the direct comparison between at least two of the maps (e.g., the 2007 and 2001 maps or the 2007 and 2021 maps). Symbology is consistent between the two maps. Description of the app and maps sufficiently orients the reader and helps to convey trends. (20pts) | Web app link is posted. Some elements of the assignment are missing, but the app still allows for map comparison. (15pts) | Web app link is present but is missing several elements, does not function properly, or otherwise impairs the ability to compare the two maps. (10pts) | Web app link is missing. (0pts) | 20pts |
TOTAL | 80pts |
If you have anything you'd like to comment on, or add to the lesson materials, feel free to post your thoughts in the Lesson 7 Discussion. For example, what did you have the most trouble with in this lesson? Was there anything useful here that you'd like to try in your own workplace?
This page includes links to resources such as additional readings and websites related to the lesson concepts. Feel free to explore these on your own. If you'd like to suggest other resources for this list, contact the instructor.
You have been hired by the Lake Raystown Watershed Council to identify potential sludge disposal sites within a watershed in south-central Pennsylvania. You must take into account the vulnerability of groundwater contamination, distance from surface water, and area of each potential site. To accomplish this task, you will use ArcGIS Spatial Analyst tools to recode and overlay maps depicting important factors that affect inherent vulnerability. You will then combine your results with information about surface water and site area to identify potential sites for sludge disposal.
At the successful completion of Lesson 8, you will have:
If you have questions now or at any point during this lesson, please feel free to post them to the Lesson 8 Discussion.
Sewage sludge is the solid waste created during the process of domestic wastewater treatment. This material is often inadvertently contaminated with many toxic organic and inorganic compounds. Lesson 8 focuses on the identification of suitable locations within the Lake Raystown Watershed where processed sewage sludge can be applied to the soil surface as part of a controlled biodegradation treatment alternative. There are two primary advantages to this process: 1) the natural renovative capabilities of the soil are used to further break down residuals that remain after standard wastewater treatment processes, and 2) the nutrient-rich sludge material serves as a beneficial soil amendment which can complement, and in many cases replace, standard fertilization practices.
An important consideration in evaluating the suitability of potential sewage disposal sites is the potential impact such sites might have on existing groundwater quality. There are a number of approaches that can be used to evaluate pollution problems associated with groundwater resources, ranging from very complex to relatively simple. An example of a complex approach is the use of sophisticated computer models (such as the MODFLOW model [257] developed by the U.S. Geological Survey) to track the dispersion of contaminants through the soil profile and beyond. While this approach may help one to accurately quantify contaminant movement and loads, it has a serious limitation: very extensive data requirements. For this reason, simpler, empirical approaches are often used to evaluate pollution potential. One such approach is the DRASTIC methodology developed by the U.S. Environmental Protection Agency.
"DRASTIC" is an acronym in which each letter stands for one of seven hydrogeological parameters that directly influence the movement of pollutants into and through the soil and sub-soil layers. Measurements within each parameter are assigned DRASTIC Ratings between 1 and 10 based on how they affect the movement of contaminants. Some of the parameters (e.g., depth to groundwater) have a much greater influence on the overall groundwater vulnerability than others. This is incorporated into the DRASTIC Index calculation by assigning weights to each of the parameters based on their relative importance. Areas with higher DRASTIC Index Scores are more likely to experience groundwater contamination in the event of a release than areas with low DRASTIC Index Scores.
The equation to calculate the DRASTIC Index is:
$$(\mathbf{D} \times 5)+(\mathbf{R} \times 4)+(\mathbf{A} \times 3)+(\mathbf{S} \times 2)+(\mathbf{T} \times 1)+(\mathbf{I} \times 5)+(\mathbf{C} \times 3)$$

The seven parameters are briefly described below:
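For example, plugging a set of purely hypothetical ratings into the equation above (D = 9, R = 6, A = 6, S = 5, T = 10, I = 6, C = 4; these are illustrative numbers only, not values from the lesson data) gives:

$$(9 \times 5)+(6 \times 4)+(6 \times 3)+(5 \times 2)+(10 \times 1)+(6 \times 5)+(4 \times 3)=45+24+18+10+10+30+12=149$$

Higher scores like this indicate greater relative vulnerability than lower scores.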
The Lake Raystown watershed is located in south-central Pennsylvania and covers an area of approximately 1,000 square miles. Contained within this watershed is Raystown Lake. This man-made recreational lake was created by a flood control dam designed to protect populated areas from Huntingdon, Pennsylvania, downstream to the mouth of the Susquehanna River.
All of the required readings for Lesson 8 are Esri help articles. Although we will demonstrate how to use these tools in the Step-by-Step Activity, the help topics will provide you with a good overview of what the tools will do when executed.
Find the help articles listed below on the ArcGIS Pro Resource Center [143] website.
This section provides links to download the Lesson 8 data, along with reference information about each dataset (metadata). Briefly review the information below so you have a general idea of the data we will use in this lesson.
For this lesson, you will be provided with all of the data in the Lesson 8 Data zip file. All of these data were created in-house by your own organization. While receiving data from an in-house source may seem like a blessing, it often comes without some of the typical documentation you receive from well-known data clearinghouses. Two of the most common shortcomings are missing metadata and missing projection information. As a result, it may be difficult to determine the source of the data, the attribute definitions, the scale of the data, and the coordinate system and datum. As distressing as this may sound, there will generally be someone in your office who can provide reliable information about the data.
Note: You should not complete this activity until you have read through all of the pages in Lesson 8. See the Lesson 8 Checklist for further information.
Create a new folder in your GEOG487 folder called "L8." Download a zip file of the Lesson 8 Data [264] and save it in your "L8" folder. Extract the zip file and view the contents. Information about all datasets used in the lesson is provided below:
The Step-by-Step Activity for Lesson 8 is divided into three parts. In Part I, we will review the relevant datasets and organize your Map. In Part II, we will create a DRASTIC Groundwater Vulnerability grid. In Part III, we will determine suitable land areas for sewage sludge application sites based on the DRASTIC ratings, distance from surface water, and size of each region.
Note: You should not complete this step until you have read through all of the pages under the Lesson 8 Module. See the Lesson 8 Checklist for further information.
In Part I, we will review the starting datasets and organize the map for analysis.
Since all of the datasets used in this lesson have the same projection, we do not need to be concerned with the order in which we load the data.
Do all of the provided raster grids have the same cell size?
Do all of the input datasets have the same extent?
What are the units of the "VALUE" attribute in the elevation grid?
How many different types of soil and rock types are in the study area?
How wide a buffer was used to create the streams data?
Where is the Lake Raystown Watershed located in relation to the state of Pennsylvania?
In Part II, we will create a series of grids representing the DRASTIC Ratings for each parameter (D - Depth to Water Table, R - Net Recharge, A - Aquifer Media, S - Soil Media, T - Topography, I - Impact of the Vadose Zone, and C - Hydraulic Conductivity). The dataset we will use to create each grid is shown in the graphic below. In this section, we will introduce two new Spatial Analyst concepts: creating slope grids from elevation data and reclassifying ranges of values as opposed to unique values.
Make sure you have the correct answer before moving on to the next step.
The "soilgrid.tif" attribute table should have all of the attributes shown below. If your data does not match this, go back and redo the previous step. Be sure to open the Feature to Raster tool > Environments and double-check that the output coordinate system and processing extent are set to match "LakeRaystown." Also, be sure to expand the table columns to view all COUNT totals.
Texture | DRASTIC Rating |
---|---|
Silty Clay Loam | 3 |
Loam | 5 |
Loamy Sand | 6 |
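If you prefer to script this step, a minimal arcpy sketch of the vector-to-raster conversion with the environment settings mentioned above might look like the following. The workspace path, the layer names "LakeRaystown.shp" and "soils.shp", the "Texture" field, and the 30 m cell size are assumptions; substitute the names and values from your own data.

```python
import arcpy

arcpy.CheckOutExtension("Spatial")

# Assumed workspace; point this at your L8 folder.
arcpy.env.workspace = r"C:\GEOG487\L8"

# Match the output coordinate system and processing extent to the watershed
# layer, as described above (layer name assumed).
watershed = arcpy.Describe("LakeRaystown.shp")
arcpy.env.outputCoordinateSystem = watershed.spatialReference
arcpy.env.extent = watershed.extent
arcpy.env.cellSize = 30  # assumed cell size; use the value specified in the lesson

# Convert the soils polygons to a raster coded by soil texture (names assumed).
arcpy.conversion.FeatureToRaster("soils.shp", "Texture", "soilgrid.tif")
```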
Make sure you have the correct answer before moving on to the next step.
The "s.tif" attribute table should match the example below. If your data does not match this, go back and redo the previous step.
Three of the seven DRASTIC factors (A - Aquifer media, I - Impact of the vadose zone, and C - Hydraulic Conductivity) can be defined on the basis of geology. We will use the Reclassify Tool again to assign DRASTIC ratings corresponding to these three factors for the appropriate surface geology units contained in the geology layer.
Make sure you have the correct answer before moving on to the next step.
The "geologygrid.tif" attribute table should have all of the attributes shown below. If your data does not match this, go back and redo the previous step.
OID | Value | Count | Rock_type |
---|---|---|---|
0 | 1 | 1480784 | Interbedded Sedimentary |
1 | 2 | 643096 | Sandstone |
2 | 3 | 388372 | Shale |
3 | 4 | 256791 | Carbonate |
Rock Type | DRASTIC Rating |
---|---|
Interbedded Sedimentary | 6 |
Sandstone | 6 |
Shale | 2 |
Carbonate | 10 |
Rock Type | DRASTIC Rating |
---|---|
Interbedded Sedimentary | 6 |
Sandstone | 6 |
Shale | 3 |
Carbonate | 10 |
Rock Type | DRASTIC Rating |
---|---|
Interbedded Sedimentary | 2 |
Sandstone | 1 |
Shale | 1 |
Carbonate | 10 |
Make sure you have the correct answer before moving on to the next step.
The "a," "i," and "c" attribute tables should have all of the attributes shown below. If your data does not match this, go back and redo the previous step. Again, be sure to expand the COUNT field to see the complete values.
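A minimal arcpy sketch of this pattern, assuming the geology grid is reclassified on its "Rock_type" field, is shown below. The rating numbers are placeholders, so be sure to substitute the values from whichever of the three tables above corresponds to the factor you are creating.

```python
import arcpy
from arcpy.sa import Reclassify, RemapValue

arcpy.CheckOutExtension("Spatial")

# Repeat this pattern once per geology-based factor (A, I, and C), swapping in
# the DRASTIC ratings from the appropriate table above. Placeholder values shown.
factor_ratings = RemapValue([["Interbedded Sedimentary", 6],
                             ["Sandstone", 6],
                             ["Shale", 2],
                             ["Carbonate", 10]])

factor_grid = Reclassify("geologygrid.tif", "Rock_type", factor_ratings)
factor_grid.save("a.tif")  # save as "a", "i", or "c" to match the factor
```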
When you have data that represents elevation, you can create several different types of derived raster layers; one of these is a slope grid. Slope represents the steepness, incline, or grade of a line or surface. A higher slope value indicates a steeper incline. With Spatial Analyst, it is easy to create a slope layer from elevation data.
Degree vs. Percentage
Be careful when choosing the slope output measurement. There are two ways to express slope values, either as a percent or as a degree. A "45 degree" slope and a "45%" slope are NOT equivalent values.
Degree slope (θ): the angle formed by a right triangle with sides of length "rise" and "run," i.e., θ = arctan(rise / run)
Percent slope: (length of "rise" / length of "run") × 100
For example, a 45-degree slope corresponds to a 100% slope (rise equals run), not a 45% slope.
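The relationship between the two measurements, and the Spatial Analyst call that produces a slope grid, can be sketched as follows. The elevation layer name "elevation" and the output names are assumptions.

```python
import math

import arcpy
from arcpy.sa import Slope

# Converting a degree slope to a percent slope: percent = tan(degrees) * 100.
degrees = 45.0
percent = math.tan(math.radians(degrees)) * 100
print(percent)  # ~100.0, i.e., a 45-degree slope is a 100% slope, not a 45% slope

# Creating slope grids from an elevation raster (layer name assumed).
arcpy.CheckOutExtension("Spatial")
slope_pct = Slope("elevation", "PERCENT_RISE")  # percent slope
slope_deg = Slope("elevation", "DEGREE")        # degree slope
slope_pct.save("t_slope_pct.tif")               # output name assumed
```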
Topography (Percent Slope) | DRASTIC Rating |
---|---|
0-2 | 10 |
2-6 | 9 |
6-12 | 5 |
12-18 | 3 |
>18 | 1 |
Make sure you have the correct answer before moving on to the next step.
The "t" attribute table should have all of the attributes shown below. If your data does not match this, go back and redo the previous step.
Reclassifying Ranges of Numbers vs. Unique Values
Sometimes you need to reclassify data based on ranges of values instead of unique values. For example, notice above that the old value of "2" is specified as the upper bound of the range "0-2" and the lower bound of the range "2-6." Which new value, "10" or "9," will be assigned to old values of "2" in the output grid?
In this case, ArcGIS will assign the old value of "2" a new value of "10," and the old value of "2.0001" a new value of "9" in the output grid. The general rule is that ArcGIS includes a break value in the group for which it forms the upper range boundary. Notice that you will encounter this same issue for all break values (e.g., "6," "12," and "18" in the example above).
This is particularly important when the break values themselves are meaningful in your analysis. The most common example is when your requirements specify "less than x" versus "less than or equal to x." If you want to reclassify values "less than 5" to a new value, you would need to specify a break value such as "4.99999999" so that the value "5" is not included in the new category. The number of decimal places you need depends on the precision of your input data. For example, if your data layer has five decimal places, you would set the reclassification thresholds as a.aaaaa - b.bbbbb, b.bbbbb - c.ccccc, and so forth.
See the ArcGIS Help for further information regarding reclassification by range [265].
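A minimal arcpy sketch of a range reclassification using the topography ratings above is shown below. The input name "t_slope_pct.tif" carries over from the slope sketch earlier and is an assumption; the comment notes how the break values behave.

```python
import arcpy
from arcpy.sa import Reclassify, RemapRange

arcpy.CheckOutExtension("Spatial")

# Each entry is [lower bound, upper bound, new value]. A cell whose old value
# equals a break value (2, 6, 12, or 18) falls in the class where that value is
# the UPPER bound, so an old value of exactly 2 becomes 10, not 9.
t_ratings = RemapRange([[0, 2, 10],
                        [2, 6, 9],
                        [6, 12, 5],
                        [12, 18, 3],
                        [18, 1000, 1]])  # 1000 is an arbitrary cap standing in for ">18"

t_grid = Reclassify("t_slope_pct.tif", "VALUE", t_ratings)
t_grid.save("t.tif")  # output name assumed to match the "t" grid referenced above
```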
Compare the "d" grid to the "streams_buffer" shapefile. Do areas near streams have high or low vulnerability?
Which input datasets (d, r, a, s, t, i, c) have the highest DRASTIC rating values?
Do you see any spatial patterns in the individual drastic grids?
Now that you have the required data layers, you can create a DRASTIC groundwater vulnerability grid based upon the DRASTIC index equation. This will involve use of the Raster Calculator to combine several grids in a weighted overlay. The graphic below shows an example of how cell values are updated during the calculation.
Combining raster layers is a simple, yet very important process with Spatial Analyst. You will often find that it is necessary to create a single layer that is comprised of several data sets. The idea is similar to that of performing an overlay with vector layers, in that you are making one out of many, with the major exception that the cell values change based on the expression used.
The addition (+) and multiplication (*) signs are the most common arithmetic operators used to combine raster layers. The plus (+) sign performs an addition with each cell, so the value in a given cell of one grid will be added to the value of the same cell in the next grid, and so on. The multiplication (*) sign, as expected, performs a multiplication based on the values in each cell.
Either operator can be used when the purpose is simply to combine grids, although you should use the same operator for all grids. However, when forming an expression that includes additional operations on individual grids, as in the case above, it is important to understand the order of precedence in which the operators are evaluated. In mathematical order-of-operations rules, multiplication always takes precedence over addition. Hence, in the expression above, the values in the "D" grid will be multiplied by 5 before they are added to the values in the "R" grid. If part of an expression needs to be evaluated out of the normal order of precedence, enclose it in parentheses, as you would on a calculator.
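In the Python window or a notebook, the same weighted overlay can be written as a map algebra expression. A minimal sketch, assuming the seven rating grids are named "d" through "c" as in this lesson and that the result is saved under the name used in the check table below:

```python
import arcpy
from arcpy.sa import Raster

arcpy.CheckOutExtension("Spatial")

# Multiplication happens before addition, so each grid is weighted first and the
# weighted grids are then summed cell by cell.
drastic_index = (Raster("d") * 5 + Raster("r") * 4 + Raster("a") * 3 +
                 Raster("s") * 2 + Raster("t") * 1 + Raster("i") * 5 +
                 Raster("c") * 3)

drastic_index.save("drastic_index.tif")
```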
Make sure you have the correct answer before moving on to the next step.
The "drastic_index" grid should have the following information. The statistics for the "COUNT" field are also provided. If your data does not match this, go back and redo the previous step.
What do the numbers in the "VALUE" field of the "drastic_index" mean in the real world? For example, do high values represent areas with high or low vulnerability to groundwater pollution?
Which parts of the watershed are most vulnerable to groundwater pollution?
Do any of the parameters have a greater influence on the final results?
Now that the groundwater vulnerability layer has been produced, we can use this data to help find the areas in the watershed most suitable for sludge disposal. Along with this dataset, we also need to incorporate the stream buffer dataset. Remember from previous lessons that it is possible to reclassify grid cells to values of "NoData" to exclude them from your analysis. We will use this technique to remove portions of each dataset that do not meet the relevant criteria. For example, we will reclassify suitable areas within each dataset as "1" and unsuitable areas as "NoData."
You can also do the opposite of this by assigning existing values of "NoData" to more meaningful values. We will use this technique to create a grid of areas that are outside of stream buffers. Then, we will use the Raster Calculator to combine the individual suitability results into one grid. Finally, we will use the Region Group tool to create regions from adjacent cells with the same results. This process is illustrated in the graphic below.
For the purposes of this lesson, we assume that state regulations require the following for a site to be considered for sludge disposal:
The calculation performed in the previous step combines the results of two Boolean operations, each of which is evaluated as:
TRUE (indicated by a value of 1) OR FALSE (indicated by a value of 0)
We are only interested in cells that meet the criteria (values of 1).
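A hedged arcpy sketch of the ideas in this part is shown below. The vulnerability threshold, the stream-buffer raster name, and the intermediate names are placeholders rather than the lesson's exact parameters; the point is the pattern of reclassifying to 1/NoData (or NoData to 1/0), combining the Boolean grids, and then grouping the surviving cells into regions.

```python
import arcpy
from arcpy.sa import Con, IsNull, Raster, RegionGroup

arcpy.CheckOutExtension("Spatial")

# Criterion 1: keep only cells at or below an acceptable vulnerability score.
# The threshold of 120 is a placeholder, not the lesson's value.
ok_drastic = Con(Raster("drastic_index.tif") <= 120, 1)  # unsuitable cells become NoData

# Criterion 2: areas outside the stream buffers. The buffer raster (name assumed)
# has values inside the buffers and NoData outside, so flip NoData to 1 and data to 0.
ok_streams = Con(IsNull(Raster("streams_buffer_grid.tif")), 1, 0)

# Combine: only cells that satisfy both criteria keep a value of 1.
ok_both = ok_drastic * ok_streams

# Group adjacent suitable cells into candidate regions.
ok_regions = RegionGroup(ok_both, "EIGHT")
ok_regions.save("OK_Regions.tif")
```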
Make sure you have the correct answer before moving on to the next step.
The "OK_DRASTIC.tif" attribute table should have all of the attributes shown below. If your data does not match this, go back and redo the previous step.
Make sure you have the correct answer before moving on to the next step.
The "OK_Streams" attribute table should have all of the attributes shown below. If your data does not match this, go back and redo the previous step.
Make sure you have the correct answer before moving on to the next step.
The "OK2criteria.tif" attribute table should have all of the attributes shown below. If your data does not match this, go back and redo the previous step.
Make sure you have the correct answer before moving on to the next step.
The "OK_Regions" statistics for the "COUNT" field should match the example below. If your data does not match this, go back and redo the previous step.
The last criterion we need to incorporate is area (sites greater than 0.5 sq km). We learned in Lesson 5 that you can calculate the area of a raster by multiplying the number of cells by the area of each cell. To calculate the area of regions within a raster, we can use this same method.
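As a worked restatement of that method (the cell size is whatever your rasters use; the only new number below is the unit conversion for the 0.5 sq km threshold):

$$\text{area}\ (\text{m}^2) = \text{COUNT} \times (\text{cell size in m})^2, \qquad 0.5\ \text{km}^2 = 500{,}000\ \text{m}^2$$

A region passes the size criterion when its cell count multiplied by the area of one cell reaches 500,000 sq m.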
Why did we use the number "30" to calculate the area?
Make sure you have the correct answer before moving on to the next step.
The "OK_Area" statistics for the "COUNT" field should match the example below. If your data does not match this, go back and redo the previous step.
Try one or more of the optional activities listed below.
Note: Try This! Activities are voluntary and are not graded, though I encourage you to complete the activity and share comments about your experience on the lesson discussion board.
Advanced Activities are designed to make you apply the skills you learned in this lesson to solve an environmental problem and explore additional resources related to lesson topics.
The regulations for sludge disposal sites were revised to help prevent land use change in the watershed. The new regulations require that sludge disposal can only occur on land that is currently used for agriculture. This will prevent areas such as forests and wetlands from being used as disposal sites.
Find suitable sites in the Lake Raystown Watershed that meet all of the following criteria:
In Lesson 8, we created a DRASTIC Groundwater Vulnerability grid and identified the potentially suitable sites for sludge disposal. We used Spatial Analyst tools to convert vector into raster data, calculate slope, reclassify grids, and combine multiple rasters. Next week, you will apply the skills and techniques you learned in the course to explore an environmental challenge on your own.
Lesson 8 is worth a total of 100 points.
If you have anything you'd like to comment on, or add to the lesson materials, feel free to post your thoughts in the Lesson 8 Discussion. For example, what did you have the most trouble with in this lesson? Was there anything useful here that you'd like to try in your own workplace?
This page includes links to resources such as additional readings and websites related to the lesson concepts. Feel free to explore these on your own. If you'd like to suggest other resources for this list, please send the instructor an email.
You are a GIS Manager for a non-profit environmental organization where you lead a small team of GIS Analysts. A pre-proposal your organization submitted for a funding opportunity made it past the first round of review. You received an invitation to present your proposal to a panel of reviewers, who will choose between your organization and a pool of competitors for the project award.
Your job is to design a GIS work plan, summarize it in a visual presentation, and convince the panel of judges that your team is the best choice. Your role as a leader is to think through the potential opportunities and obstacles, provide a vision for your team to implement, and win the project.
At the successful completion of the Final Project, you will:
If you have questions now or at any point during this lesson, please feel free to post them to the Final Project Discussion.
This lesson is two weeks in length and is worth a total of 200 points. Please refer to the Course Calendar for specific time frames and due dates. To finish this lesson, you must complete the activities listed below. You may find it useful to print this page out first so that you can follow along with the directions. Simply click the arrow to navigate through the lesson and complete the activities in the order that they are displayed.
SDG image retrieved from the United Nations [1]
You have been applying spatial problem-solving skills throughout this course. In previous lessons, I provided step-by-step workflows illustrating how to use GIS to explore specific environmental scenarios.
Now, it is your turn to apply your spatial skills by designing a workflow from scratch. Workflow planning is not a linear process. It involves loops and iterations, and some dead ends along the way.
There are also multiple correct paths in GIS that will lead to the same end product, so you may need to think through a few different options. I find it easier to design a GIS workflow if I create a visual map of the process like the example shown here, from Lesson 3.
Prezi.com is a great tool for planning complex processes. The program makes it very easy to compile several ideas in one workspace, rearrange them into groups, and add more details later on as your plans solidify. Unlike Microsoft Word and PowerPoint, Prezi does not impose a linear order on your information. Another helpful feature of Prezi is the ability to convey relative scale in your brainstorming map, so main ideas are larger than minor details. This approach makes it easier to break your analysis into several pieces, then focus on one piece at a time. With Prezi, you can also embed screenshots, videos, and other media to keep track of your ideas in a visual manner.
You may make a copy of a Prezi blue circles template [272] and then edit it for your own project. If you are not a Prezi fan, I also recommend Canva.com [273] or ArcGIS StoryMaps [274]. Canva is a cloud-based program that also allows you to map out your own ideas or use built-in templates. And, you will read more about ArcGIS StoryMaps in the required readings and videos section [275].
GIS professionals who communicate well are the ones who get ahead. Effective communication in the GIS field involves researching your audience, choosing language that appeals to them, and communicating how you add value. Most clients and end-users are not interested in the nuts and bolts of GIS. They want to know how they can make better decisions, save money or time, more easily share information, or better reach their organization's goals. A general rule to follow is "simpler is better." The more you can make your products self-explanatory and appealing to your target audience, the more likely your audience is to use them and value your work.
My first boss told me a story that has stuck with me for many years. He said:
"When you take your car to the auto shop, do you want to receive a lengthy report about the model number of the tools the mechanic used, how high they had to raise your car on the lift or the particular order of steps they followed to change your oil or rotate your tires? Probably not. You likely just want to know when you can get your car back, how much it costs, any serious issues you need to address and what will happen if you wait too long to fix them. It's the same with GIS. Your client hired you to handle the technical details because it is not something they are particularly interested in, have time for, or are good at themselves. It's your job to apply your skills to their problem, then translate the results into terms that they care about."
I frequently hear GIS students and professionals tossing around data formats and Esri command names as though they were common verbs and nouns. To most prospective clients and proposal review teams, these are unknown terms, and they will quickly lose interest. My advice is to avoid them; when you feel they are necessary, remember to provide a definition.
You will need to apply these skills often - for example: when pitching a new project to a client, convincing your boss that your GIS department or team should receive funding, writing grant applications, responding to Requests for Proposals (RFPs), presenting your work at technical conferences, or marketing your own portfolio and skills to potential new employers. Creating a communication plan is also an iterative process. Rarely is a first draft a final product.
Consider the questions below as you design your work plan. (To receive full credit, you need to demonstrate in your visual work plan that you considered the questions in each section.)
Tables | Vectors | Rasters | Present & Share |
---|---|---|---|
Field Calculator | Clip | Raster to Polygon | ArcGIS Explorer Online |
Summary Statistics | Union | Reclassify (Unique Values & Ranges) | Google Earth |
Join | Merge | Reclassify NoData to Values | Screen Captures/Videos |
Calculate Geometry | Dissolve | Tabulate Area | Prezi |
Recode Missing Data | Buffer | Environment Settings | Animations |
Convert units | Feature to Raster | Mosaic | Multi-Dataframe Maps |
Plot X,Y Coordinates | Interpolate to Raster | Raster Calculator - Clip | Graphs |
Change Projection | | Raster Calculator - Mathematical Overlay | ArcGIS Online Maps |
Export Selection | | Raster Calculator - Select by Expression | ArcGIS Online Web Apps |
 | | Region Group | Publish Web Services |
 | | Zonal Geometry | |
 | | Zonal Histogram | |
 | | Slope | |
 | | Extract by Attributes | |
 | | Change Projection | |
In the Final Project, we applied spatial problem-solving skills and concepts covered in the course to design a GIS work plan from scratch and pitch it to a client.
The Final Project is worth a total of 200 points (20% of total course points).
Work Plan (Mastery) | Demonstrates conceptual understanding of GIS concepts and operations. (25pts) | Demonstrates an understanding of most GIS concepts, but appears unclear about some. (15pts) | Demonstrates a complete lack of understanding of most GIS concepts and operations. (0pts) | 25pts |
---|---|---|---|---|
Work Plan (Accuracy) | Work plan accurately represents a real-world scenario. (25pts) | Work plan is incomplete or is in some ways unrealistic. (15pts) | Work plan is not plausible as a real-world scenario. (0pts) | 25pts |
Work Plan (Creativity) | Work plan is designed in a creative way utilizing a variety of tools. (15pts) | Work plan shows some creativity but is largely out-of-the-box ArcGIS. (8pts) | Work plan contains little creativity beyond basic ArcGIS tools. (0pts) | 15pts |
Work Plan (Effective Communication) | Work plan is designed in a way that effectively communicates the scenario. (15pts) | Work plan communicates all required information but is a bit hard to understand. (8pts) | Work plan is poorly designed and is confusing or overwhelming to the reader. (0pts) | 15pts |
Work Plan (Follow Instructions) | Work plan includes all required elements (Prezi/Canva/StoryMap, raster/vector/xy, >5 steps, 3 screen captures, etc.). (20pts) | Work plan is missing an element or two. (10pts) | Work plan is missing several required elements. (0pts) | 20pts |
Video Presentation (Effective Communication) | Video conforms to instructions and was produced in a manner that renders it compelling and informative. (25pts) | Video adequately meets requirements but appears hastily produced making it more difficult to follow and understand. (15pts) | Video was poorly produced making it difficult or impossible for the audience to understand the work plan. (0pts) | 25pts |
Video Presentation (Follow Instructions) | <5-minute video is linked. (25pts) | Video is linked but does not conform to instructions. (15pts) | Video is missing. (0pts) | 25pts |
Reflection | Post includes 200-300 words sufficiently discussing the favorite, hardest, and easiest parts of this project. (25pts) | Post is present but does not adequately discuss the experience of working on this project. (15pts) | Posting is missing. (0pts) | 25pts |
Peer Review | A 200-300 word post about another student's project is present and includes 2 positive comments and 1 suggestion for improvement. (25pts) | Post is present but does not adequately evaluate another student's project. (15pts) | Post is missing. (0pts) | 25pts |
TOTAL | 200pts |
If you have anything you'd like to comment on, or add to the lesson materials, feel free to post your thoughts in the Final Project Discussion. For example, what did you have the most trouble with in this lesson? Was there anything useful here that you'd like to try in your own workplace?
This page includes links to resources such as additional readings and websites related to the lesson concepts. Feel free to explore these on your own. If you'd like to suggest other resources for this list, please send the instructor an email.
Links
[1] https://www.un.org/sustainabledevelopment/news/communications-material/#FAQ
[2] https://www.e-education.psu.edu/geog487/sites/www.e-education.psu.edu.geog487/files/MillenniumEcosysAssess2005.pdf
[3] https://www.millenniumassessment.org/documents/document.356.aspx.pdf
[4] https://www.grida.no/publications/503
[5] https://ceq.doe.gov/
[6] https://www.epa.gov/history/origins-epa
[7] https://www.ecfr.gov/cgi-bin/text-idx?SID=49d4e1271eb649bbbbefc2c9b076688d&mc=true&node=pt40.37.1502&rgn=div5
[8] https://cdxnodengn.epa.gov/cdx-enepa-II/public/action/eis/search
[9] https://www.e-education.psu.edu/geog487/sites/www.e-education.psu.edu.geog487/files/activities/lesson01/FEIS_Grande%20Prairie_2014-12-29.pdf
[10] https://www.e-education.psu.edu/geog487/sites/www.e-education.psu.edu.geog487/files/FEIS%20Cover%20Abstract%20Acronyms%20Abbreviations%20ES%20TOC%20Text%20Tables%20Diagrams.pdf
[11] https://www.wapa.gov/wp-content/uploads/2023/04/Rail-Tie-Wind-ROD-Final.pdf
[12] http://www.youtube.com/watch?v=zi0wsEZ71Rs
[13] http://www.uaja.com/
[14] https://www.e-education.psu.edu/geog487/sites/www.e-education.psu.edu.geog487/files/BeneficialReuseReport.pdf
[15] https://www.e-education.psu.edu/geog487/sites/www.e-education.psu.edu.geog487/files/OdorControlReport.pdf
[16] https://www.e-education.psu.edu/geog487/sites/www.e-education.psu.edu.geog487/files/OdorForm.pdf
[17] https://docs.google.com/forms/d/e/1FAIpQLScvwaFdE3nwov64reojORQIAtFHMhs0GFC2IDLQemioH_cJfA/viewform
[18] http://firms2.modaps.eosdis.nasa.gov/
[19] https://firms2.modaps.eosdis.nasa.gov/map
[20] https://firms.modaps.eosdis.nasa.gov/map/
[21] https://firms2.modaps.eosdis.nasa.gov/usfs/active_fire/
[22] https://storymaps.arcgis.com/stories/9c011e1e569845c092eba5c377259202
[23] https://www.airnow.gov/fires/using-airnow-during-wildfires/
[24] https://www.usgs.gov/products/data
[25] https://geodata.nal.usda.gov/geonetwork/srv/eng/catalog.search#/home
[26] https://www.nrel.gov/gis/data-tools.html
[27] https://data-usfs.hub.arcgis.com/
[28] https://fws.gov/program/geospatial-data-services/what-we-do
[29] https://www.epa.gov/geospatial/epa-geospatial-data
[30] https://data.noaa.gov/datasetsearch/
[31] https://gis-fema.hub.arcgis.com/
[32] https://www.census.gov/programs-surveys/geography.html
[33] https://www.usgs.gov/core-science-systems/national-geospatial-program
[34] http://datagateway.nrcs.usda.gov/
[35] http://www.epa.gov/geospatial/
[36] https://earthexplorer.usgs.gov/
[37] https://www.usgs.gov/national-hydrography/national-hydrography-dataset
[38] https://waterdata.usgs.gov/usa/nwis/nwis
[39] http://dwtkns.com/srtm30m/
[40] https://glovis.usgs.gov/
[41] https://neo.gsfc.nasa.gov/
[42] https://sedac.ciesin.columbia.edu/
[43] http://www.pasda.psu.edu/
[44] http://www.glo.texas.gov/land/land-management/gis/
[45] https://www.opendataphilly.org/
[46] https://egis-lacounty.hub.arcgis.com/
[47] https://guides.libraries.psu.edu/c.php?g=376207&p=5296031
[48] http://guides.libraries.psu.edu/c.php?g=376207&p=5296082
[49] https://www.colorado.edu/libraries/libraries/earth-sciences-map-library/map-library-collection
[50] https://opentopography.org/
[51] https://www.indexdatabase.de/
[52] https://asf.alaska.edu/
[53] https://scihub.copernicus.eu/
[54] https://data.humdata.org/
[55] https://www.un.org/geospatial/mapsgeo
[56] https://ladsweb.modaps.eosdis.nasa.gov/view-data/
[57] https://guides.libraries.psu.edu/c.php?g=376207&p=5296088
[58] http://www.diva-gis.org/gdata
[59] https://www.nrsc.gov.in/EOP_irsdata_Objective_New
[60] https://earth.esa.int/eogateway/catalog
[61] https://gis.ducks.org/
[62] http://www.naturalearthdata.com/
[63] https://audubon.maps.arcgis.com/home/index.html
[64] https://geospatial.tnc.org/
[65] http://geospatial.tnc.org/pages/data
[66] https://livingatlas.arcgis.com
[67] https://opendata.arcgis.com/about
[68] https://nwt.lternet.edu/other-niwot-datasets
[69] http://www.datacommons.psu.edu/default.html
[70] https://www.chesapeakeconservancy.org/conservation-innovation-center/high-resolution-data/lulc-data-project-2022/
[71] http://www.nationalmap.gov
[72] https://storymaps.arcgis.com/stories/f0067611444243799970465d3cee113a
[73] https://livingatlas.arcgis.com/en/home/
[74] https://doc.arcgis.com/en/web-appbuilder/create-apps/widget-swipe.htm
[75] https://www.e-education.psu.edu/geog487/sites/www.e-education.psu.edu.geog487/files/activities/lesson02/L2Data.zip
[76] https://www.arcgis.com/home/item.html?id=28f49811a6974659988fd279de5ce39f
[77] http://www.arcgis.com/home/item.html?id=10df2279f9684e4a9f6a7f08febac2a9
[78] https://www.arcgis.com/home/item.html?id=30d6b8271e1849cd9c3042060001f425
[79] https://www.mrlc.gov/viewer/
[80] https://www.mrlc.gov/data
[81] https://data.usgs.gov/datacatalog/data/USGS:60cb3da7d34e86b938a30cb9
[82] https://www.mrlc.gov/downloads/sciweb1/shared/mrlc/metadata/nlcd_2021_land_cover_l48_20230630.xml
[83] https://doi.org/10.1016/j.isprsjprs.2020.02.019
[84] https://www.mrlc.gov/geoserver/mrlc_display/NLCD_2021_Land_Cover_L48/wms?service=WMS&request=GetCapabilities
[85] https://www.arcgis.com/apps/mapviewer/index.html?layers=cfcb7609de5f478eb7666240902d4d3d
[86] https://pennstate.maps.arcgis.com/home/index.html
[87] https://pennstate.maps.arcgis.com/home/
[88] https://apps.nationalmap.gov/viewer/
[89] https://pennstate.maps.arcgis.com/home/search.html?q=gallery%20&start=1&sortOrder=true&sortField=relevance&restrict=false&focus=applications-web
[90] https://livingatlas.arcgis.com/livefeeds-status/
[91] http://resources.arcgis.com/en/tutorials/
[92] https://www.usgs.gov/core-science-systems/national-geospatial-program/training
[93] http://www.cec.org/tools-and-resources/north-american-environmental-atlas
[94] http://changematters.esri.com/compare
[95] https://enviro.epa.gov/enviro/em4ef.home
[96] http://gcmd.nasa.gov/KeywordSearch/Keywords.do?Portal=GCMD_Services&KeywordPath=ServiceParameters%7CWEB+SERVICES&MetadataType=1&lbnode=mdlb2
[97] https://hazards.fema.gov/femaportal/wps/portal/
[98] https://www.esri.com/arcgis-blog/products/arcgis-pro/mapping/projection-on-the-fly-and-geographic-transformations/
[99] https://www.fws.gov/refuge/ottawa/
[100] https://www.e-education.psu.edu/geog487/sites/www.e-education.psu.edu.geog487/files/activities/lesson03/Metzger_Marsh_Restoration.pdf
[101] https://www.e-education.psu.edu/geog487/sites/www.e-education.psu.edu.geog487/files/activities/lesson03/Invasive_Species.pdf
[102] https://www.e-education.psu.edu/geog487/sites/www.e-education.psu.edu.geog487/files/activities/lesson03/Crane_Creek.pdf
[103] https://www.glri.us/node/492
[104] https://www.e-education.psu.edu/geog487/sites/www.e-education.psu.edu.geog487/files/activities/lesson03/L3_Data.zip
[105] http://www.arcgis.com/home
[106] https://www.arcgis.com/home/item.html?id=b834a68d7a484c5fb473d4ba90d35e71
[107] https://naip-usdaonline.hub.arcgis.com/
[108] https://www.arcgis.com/home/item.html?id=cdd3c41366b545af9c1ff32a2c180848
[109] https://gis.apfo.usda.gov/arcgis/rest/services/
[110] https://gis.apfo.usda.gov/arcgis/rest/services/NAIP/USDA_CONUS_PRIME/ImageServer/info/metadata
[111] https://landscape10.arcgis.com/arcgis/rest/services/USA_Federal_Lands/ImageServer
[112] https://www.arcgis.com/home/item.html?id=745ed874c1394da3a9aae50267c9e049
[113] http://services.arcgis.com/QVENGdaPbd4LUkLV/arcgis/rest/services/National_Wildlife_Refuge_System_Boundaries/FeatureServer
[114] http://www.fws.gov/wetlands/data/Mapper.html
[115] https://www.ogc.org/standards/wms#downloads
[116] http://www.fws.gov/wetlands/Data/Data-Download.html
[117] http://www.fws.gov/wetlands/Data/Web-Map-Services.html
[118] https://www.fws.gov/wetlands/data/NSDI-Wetlands-Layer.html
[119] http://www.fws.gov/wetlands/Documents/National-Spatial-Data-Infrastructure-Wetlands-Layer-Fact-Sheet.pdf
[120] https://www.e-education.psu.edu/geog487/sites/www.e-education.psu.edu.geog487/files/activities/lesson03/glcwc_cwi_polygon.zip
[121] http://glc.org/
[122] https://www.glc.org/wp-content/uploads/2016/10/CWC-GreatLakesCoastalWetlandsInventory-Metadata.pdf
[123] https://www.e-education.psu.edu/geog487/sites/www.e-education.psu.edu.geog487/files/Hydrogeomorphic%20Classification%20for%20Great%20Lakes%20Coastal%20Wetlands.pdf
[124] https://www.fws.gov/wetlandsmapservice/services/Wetlands/MapServer/WMSServer?
[125] http://www.fws.gov.wms
[126] https://www.fws.gov/program/national-wetlands-inventory/metadata
[127] https://www.glc.org/library/2008-great-lakes-coastal-wetland-monitoring-plan
[128] https://www.glc.org/wp-content/uploads/2016/10/CWC-GLWetlandsInventory-ClassificationScheme.pdf
[129] https://pro.arcgis.com/en/pro-app/help/data/geodatabases/overview/arcgis-field-data-types.htm
[130] https://www.google.com/earth/versions/#download-pro
[131] http://www.greatlakesphragmites.net/management/programs-and-projects/
[132] https://geospatial.ohiodnr.gov/
[133] http://www.ducks.ca/initiatives/gis-mapping-applications/
[134] https://www.usgs.gov/centers/glsc
[135] https://www.greatlakesphragmites.net/
[136] http://earthexplorer.usgs.gov
[137] https://www.usgs.gov/centers/eros
[138] https://www.glerl.noaa.gov/data/wlevels/#observations
[139] https://www.e-education.psu.edu/geog487/sites/www.e-education.psu.edu.geog487/files/activities/lesson05/2009-3_phragmites.pdf
[140] https://www.vims.edu/newsandevents/topstories/2019/ccrmp.php
[141] https://cmap22.vims.edu/VACoastalResourcesTool/
[142] https://cmap22.vims.edu/VACoastalResourcesTool/?page=CoastalViewerPage
[143] https://www.esri.com/en-us/arcgis/products/arcgis-pro/resources
[144] https://pro.arcgis.com/en/pro-app/help/mapping/layer-properties/import-symbology-from-another-layer.htm
[145] https://pro.arcgis.com/en/pro-app/help/mapping/animation/overview-of-animation.htm
[146] https://pro.arcgis.com/en/pro-app/help/mapping/animation/author-a-new-animation.htm
[147] https://pro.arcgis.com/en/pro-app/help/mapping/animation/view-the-animation-s-keyframes.htm
[148] https://pro.arcgis.com/en/pro-app/help/analysis/geoprocessing/charts/charts-quick-tour.htm
[149] https://www.e-education.psu.edu/geog487/sites/www.e-education.psu.edu.geog487/files/activities/lesson04/L4Data.zip
[150] https://www.glerl.noaa.gov/data/dashboard/GLD_HTML5.html
[151] https://www.fws.gov/refuge/ottawa
[152] https://www.friendsofottawanwr.org/refuge-challenges.html
[153] http://sws.org/
[154] http://www.epa.gov/owow/wetlands/
[155] https://www.invasivespeciesinfo.gov/
[156] http://glc.org
[157] https://ohiodnr.gov/static/documents/wildlife/fish-management/Ohio%20AIS%20State%20Management%20Plan.pdf
[158] https://www.epa.gov/wetlands/classification-and-types-wetlands#undefined
[159] http://www.fws.gov/midwest/endangered/clams/zebra.html
[160] https://www.usgs.gov/centers/eros/science/national-land-cover-database?qt-science_center_objects=0#qt-science_center_objects
[161] https://www.usgs.gov/centers/wgsc/science/land-cover-trends?qt-science_center_objects=0#qt-science_center_objects
[162] https://www.mdpi.com/2073-445X/11/2/298/htm
[163] http://pubs.usgs.gov/fs/2012/3020/fs2012-3020.pdf
[164] https://nca2018.globalchange.gov/chapter/5/
[165] http://www.usgs.gov/centers/eros/science/eyes-earth
[166] https://www.usgs.gov/centers/eros/eyes-earth-episode-3-national-land-cover-database
[167] https://d9-wret.s3.us-west-2.amazonaws.com/assets/palladium/production/s3fs-public/atoms/audio/EOE-RogerAuch-LandUse.mp3
[168] https://www.usgs.gov/media/audio/eyes-earth-episode-69-thirty-years-land-change-us
[169] https://pro.arcgis.com/en/pro-app/help/main/welcome-to-the-arcgis-pro-app-help.htm
[170] https://pro.arcgis.com/en/pro-app/tool-reference/spatial-analyst/understanding-reclassification.htm
[171] https://pro.arcgis.com/en/pro-app/tool-reference/spatial-analyst/reclassify.htm
[172] https://pro.arcgis.com/en/pro-app/tool-reference/spatial-analyst/tabulate-area.htm
[173] https://www.usgs.gov/media/videos/landsat-action-land-cover-and-land-cover-change-tom-loveland
[174] https://www.usgs.gov/special-topics/lcmap/data
[175] https://eros.usgs.gov/lcmap/viewer/index.html
[176] https://www.usgs.gov/faqs/how-do-changes-climate-and-land-use-relate-one-another-1?qt-news_science_products=0#qt-news_science_products
[177] https://youtu.be/oyiNyWQeysI
[178] https://www.wri.org/insights/7-things-know-about-ipccs-special-report-climate-change-and-land#:~:text=The%20way%20we're%20using,for%20farms%2C%20drives%20these%20emissions.
[179] https://journals.ametsoc.org/doi/10.1175/2009BAMS2769.1
[180] https://journals.ametsoc.org/doi/pdf/10.1175/2009BAMS2769.1
[181] https://www.e-education.psu.edu/geog487/sites/www.e-education.psu.edu.geog487/files/activities/lesson05_2023/L5Data.zip
[182] http://www.pasda.psu.edu
[183] http://colorbrewer2.org/
[184] https://pubs.er.usgs.gov/publication/70004689
[185] https://pubs.er.usgs.gov/publication/pp964
[186] https://agsci.psu.edu/aec/projects/greening-the-lower-susquehanna
[187] http://lcluc.umd.edu/
[188] http://pubs.usgs.gov/fs/2012/3020/
[189] http://www.epa.gov/eco-research/multiresolution-land-characteristics-mrlc-consortium
[190] https://coast.noaa.gov/digitalcoast/data/
[191] https://www.usgs.gov/core-science-systems/nli/landsat/global-land-survey-gls?qt-science_support_page_related_con=0#qt-science_support_page_related_con
[192] https://www.usgs.gov/centers/eros/science/usgs-eros-archive-land-cover-products-global-land-cover-characterization-glcc?qt-science_center_objects=0#qt-science_center_objects
[193] http://modis-land.gsfc.nasa.gov/
[194] https://www.ipcc.ch/
[195] https://unfccc.int/topics/land-use/workstreams/redd/what-is-redd#
[196] https://unfccc.int/
[197] https://pro.arcgis.com/en/pro-app/help/analysis/spatial-analyst/basics/what-is-the-spatial-analyst-extension.htm
[198] https://pro.arcgis.com/en/pro-app/tool-reference/spatial-analyst/an-overview-of-the-spatial-analyst-toolbox.htm
[199] https://pro.arcgis.com/en/pro-app/help/analysis/spatial-analyst/performing-analysis/the-analysis-environment-of-spatial-analyst.htm
[200] https://pro.arcgis.com/en/pro-app/help/analysis/geoprocessing/basics/geoprocessing-environment-settings.htm
[201] https://pro.arcgis.com/en/pro-app/tool-reference/geostatistical-analyst/an-overview-of-the-interpolation-toolset.htm
[202] https://pro.arcgis.com/en/pro-app/help/analysis/geostatistical-analyst/how-inverse-distance-weighted-interpolation-works.htm
[203] https://pro.arcgis.com/en/pro-app/tool-reference/spatial-analyst/an-overview-of-the-zonal-tools.htm
[204] https://www.e-education.psu.edu/geog487/sites/www.e-education.psu.edu.geog487/files/image/lesson06/Pro/L6_Data.zip
[205] http://ifri.forgov.org/
[206] https://www.e-education.psu.edu/geog487/sites/www.e-education.psu.edu.geog487/files/image/lesson05/Pro/Lesson5ExportTable.png
[207] https://www.e-education.psu.edu/geog487/sites/www.e-education.psu.edu.geog487/files/activities/lesson05/Calculating_CO2_Sequestration_by_Trees.pdf
[208] https://www.fs.usda.gov/ecosystemservices/pdf/estimates-forest-types.pdf
[209] https://www.uaex.uada.edu/publications/pdf/FSA-5017.pdf
[210] http://www.onlineconversion.com
[211] https://www.c2es.org/content/california-cap-and-trade/
[212] https://www.fao.org/redd/areas-of-work/national-forest-monitoring-system/en/
[213] http://unfccc.int/2860.php
[214] http://www.ipcc.ch/
[215] https://www.ipcc.ch/reports/
[216] https://www.fs.usda.gov/ccrc/
[217] https://unfccc.int/climate-action/united-nations-carbon-offset-platform?gclid=CjwKCAjwh4ObBhAzEiwAHzZYU2suPgbJ_x7pI0m4SS7O4jBSQAd1uhcH3Rdj3kf8wB-coLkOqsoxwRoCj_gQAvD_BwE
[218] https://www.greenclimate.fund/redd
[219] http://www.treebenefits.com/calculator/
[220] https://www.psu.edu/news/research/story/research-help-private-forest-owners-manage-woodlands-ecosystem-services/
[221] https://rdcu.be/dmDiZ
[222] https://www.ipcc.ch/working-group/wg2/?idp=158
[223] https://www.ucsusa.org/sites/default/files/2019-10/deforestation-success-stories-2014.pdf
[224] http://www.uncclearn.org/wp-content/uploads/library/13052015unepviet2.pdf
[225] https://crsreports.congress.gov/product/pdf/R/R46952
[226] http://maps.google.com/maps?f=q&source=s_q&hl=en&geocode=&q=brazil+clear+cut&sll=-3.873161,-54.241219&sspn=0.135817,0.125828&ie=UTF8&filter=0&rq=1&ev=zi&radius=4.34&ll=-3.875216,-54.241219&spn=0.140098,0.125828&t=h&z=13
[227] https://news.mongabay.com/2013/03/roads-could-help-protect-the-environment-rather-than-destroy-it-argues-nature-paper/
[228] https://onlinelibrary.wiley.com/doi/full/10.1002/ece3.7027
[229] https://pro.arcgis.com/en/pro-app/tool-reference/conversion/converting-features-to-raster-data.htm
[230] https://pro.arcgis.com/en/pro-app/tool-reference/conversion/an-overview-of-the-from-raster-toolset.htm
[231] https://pro.arcgis.com/en/pro-app/tool-reference/conversion/an-overview-of-the-to-raster-toolset.htm
[232] https://pro.arcgis.com/en/pro-app/tool-reference/data-management/mosaic-to-new-raster.htm
[233] https://pro.arcgis.com/en/pro-app/tool-reference/image-analyst/how-raster-calculator-works.htm
[234] https://pro.arcgis.com/en/pro-app/help/analysis/geoprocessing/charts/histogram.htm
[235] https://pro.arcgis.com/en/pro-app/tool-reference/spatial-analyst/zonal-histogram.htm
[236] https://pro.arcgis.com/en/pro-app/tool-reference/spatial-analyst/region-group.htm
[237] https://pro.arcgis.com/en/pro-app/tool-reference/spatial-analyst/h-how-zonal-geometry-works.htm
[238] https://www.e-education.psu.edu/geog487/sites/www.e-education.psu.edu.geog487/files/image/lesson07/Pro/L7Data.zip
[239] https://www.wri.org/initiatives/forest-atlases#project-tabs
[240] https://data.globalforestwatch.org/documents/7af94f001fde4955a19907f7864aa9cf/about
[241] https://data.humdata.org/dataset/cod-ab-cmr?
[242] https://www.openstreetmap.org/
[243] https://www.globalforestwatch.org/map/?map=eyJjZW50ZXIiOnsibGF0Ijo4LjU5NDQ4MzAwNjE3NTk4LCJsbmciOjE1LjU4NzM5NjAxODk5MDEwNn0sInpvb20iOjQuNzQwMjk0NTg0NTg0ODQ4fQ%3D%3D&utm_campaign=treecoverloss2022&utm_medium=bitly&utm_source=GlobalForestReview
[244] https://colorbrewer2.org/
[245] http://www.globalforestwatch.org/
[246] https://landsat.gsfc.nasa.gov/
[247] http://pdf.wri.org/gfw_centralafrica_full.pdf
[248] https://pfbc-cbfp.org/forests-2008.html
[249] http://data.globalforestwatch.org/datasets/000078d77c404100be9d1ab027d1fa9e
[250] http://www.wri.org/blog/2014/02/9-maps-explain-worlds-forests
[251] http://www.wri.org/resources
[252] https://www.globalforestwatch.org/map/?map=eyJjZW50ZXIiOnsibGF0Ijo1LjkxODMyMjY1MDM1NzMwOSwibG5nIjoxMy4yOTcyNDUzMDIwMzIyNzV9LCJ6b29tIjo1LjI4ODM1MzUwMTUwMTcxLCJkYXRhc2V0cyI6W3siZGF0YXNldCI6ImludGVncmF0ZWQtZGVmb3Jlc3RhdGlvbi1hbGVydHMtOGJpdCIsIm9wYWNpdHkiOjEsInZpc2liaWxpdHkiOnRydWUsImxheWVycyI6WyJpbnRlZ3JhdGVkLWRlZm9yZXN0YXRpb24tYWxlcnRzLThiaXQiXX0seyJkYXRhc2V0IjoicG9saXRpY2FsLWJvdW5kYXJpZXMiLCJsYXllcnMiOlsiZGlzcHV0ZWQtcG9saXRpY2FsLWJvdW5kYXJpZXMiLCJwb2xpdGljYWwtYm91bmRhcmllcyJdLCJvcGFjaXR5IjoxLCJ2aXNpYmlsaXR5Ijp0cnVlfV19&mapMenu=eyJkYXRhc2V0Q2F0ZWdvcnkiOiJmb3Jlc3RDaGFuZ2UifQ%3D%3D
[253] http://www.youtube.com/watch?v=Oe1RYWBuhrE
[254] https://www.youtube.com/watch?v=pJD4_lwyy68
[255] https://www.youtube.com/watch?v=xHSRCeU1Hdc&feature=related
[256] https://youtu.be/a-aXuEU2Z4A
[257] http://water.usgs.gov/nrp/gwsoftware/modflow.html
[258] https://pro.arcgis.com/en/pro-app/tool-reference/3d-analyst/how-slope-works.htm
[259] https://pro.arcgis.com/en/pro-app/help/analysis/spatial-analyst/mapalgebra/what-is-map-algebra.htm
[260] https://pro.arcgis.com/en/pro-app/help/analysis/spatial-analyst/mapalgebra/an-overview-of-the-rules-for-map-algebra.htm
[261] https://pro.arcgis.com/en/pro-app/arcpy/spatial-analyst/an-overview-of-the-map-algebra-operators.htm
[262] https://pro.arcgis.com/en/pro-app/help/analysis/spatial-analyst/mapalgebra/working-with-operators.htm
[263] https://pro.arcgis.com/en/pro-app/tool-reference/spatial-analyst/extract-by-attributes.htm
[264] https://www.e-education.psu.edu/geog487/sites/www.e-education.psu.edu.geog487/files/activities/lesson08/L8Data.zip
[265] https://pro.arcgis.com/en/pro-app/tool-reference/spatial-analyst/reclass-by-ranges-of-values.htm
[266] http://raystown.uslakes.info/
[267] https://www.nab.usace.army.mil/Missions/Dams-Recreation/Raystown/
[268] https://nepis.epa.gov/Exe/ZyNET.exe/20007KU4.TXT?ZyActionD=ZyDocument&Client=EPA&Index=1986+Thru+1990&Docs=&Query=&Time=&EndTime=&SearchMethod=1&TocRestrict=n&Toc=&TocEntry=&QField=&QFieldYear=&QFieldMonth=&QFieldDay=&IntQFieldOp=0&ExtQFieldOp=0&XmlQuery=&File=D%3A%5Czyfiles%5CIndex%20Data%5C86thru90%5CTxt%5C00000001%5C20007KU4.txt&User=ANONYMOUS&Password=anonymous&SortMethod=h%7C-&MaximumDocuments=1&FuzzyDegree=0&ImageQuality=r75g8/r75g8/x150y150g16/i425&Display=hpfr&DefSeekPage=x&SearchBack=ZyActionL&Back=ZyActionS&BackDesc=Results%20page&MaximumPages=1&ZyEntry=1&SeekPage=x&ZyPURL
[269] https://www.e-education.psu.edu/geog487/sites/www.e-education.psu.edu.geog487/files/activities/lesson08/GroundWaterPollution%20in%20Ohio.pdf
[270] https://www.e-education.psu.edu/geog487/sites/www.e-education.psu.edu.geog487/files/activities/lesson08/Evans_1997.pdf
[271] https://www.e-education.psu.edu/geog487/sites/www.e-education.psu.edu.geog487/files/activities/lesson08/Brandt_1989.pdf
[272] https://prezibase.com/shop/blue-circles-free-prezi-template/
[273] https://www.canva.com/
[274] https://www.esri.com/en-us/arcgis/products/arcgis-storymaps/resources
[275] https://www.e-education.psu.edu/geog487/node/177
[276] https://community.esri.com/t5/esri-training-blog/use-the-five-step-gis-analysis-process/ba-p/899436
[277] http://www.directionsmag.com/articles/thinking-spatially/123985
[278] https://mediaspace.esri.com/media/t/1_873kz7cj
[279] http://storymaps.esri.com/downloads/Building%20Story%20Maps.pdf
[280] https://www.esri.com/arcgis-blog/products/story-maps/mapping/5-principles-of-effective-storytelling/
[281] https://youtu.be/8wY14zHDmEs
[282] https://www.youtube.com/watch?v=2dwZZPj707I&feature=share&list=PL4F5158389507E395
[283] http://storymaps.esri.com/downloads/Telling%20Stories%20with%20Maps.pdf
[284] https://www.esri.com/about/newsroom/arcwatch/add-audio-to-your-esri-story-map-tour-app/
[285] https://storymaps.arcgis.com/collections/2a13814196244a15b185563628593d00
[286] https://nam10.safelinks.protection.outlook.com/?url=https%3A%2F%2Fstorymaps.arcgis.com%2Fstories%2F42b1a6fe6a524b578becd12c0bee4b4c&data=05%7C01%7Cmgz1%40psu.edu%7C22c6b666f3c64f141b8608dbc05780c5%7C7cf48d453ddb4389a9c1c115526eb52e%7C0%7C0%7C638315254272430239%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C3000%7C%7C%7C&sdata=%2FhHUghuCxD5w1hkjRQAAiqIJFuWTmsDfLJTTe9si7Cs%3D&reserved=0
[287] http://www.audubon.org/plover
[288] https://www.esri.com/about/newsroom/insider/five-compelling-story-maps-for-earth-day-2015/
[289] https://youtu.be/sVHmptWMo_U
[290] http://www.arcgis.com/apps/Compare/storytelling_tabbed/index.html?appid=f537896f8e30481e901938eb049a73a0
[291] https://www.duarte.com/resources/books/slideology/
[292] https://hbr.org/2020/01/what-it-takes-to-give-a-great-presentation
[293] http://grants.gov/
[294] https://www.techsoup.org/support/articles-and-how-tos/overview-of-the-rfp-process