The geologic record is an incredibly detailed archive of Earth's history. Visit the Library of Congress and you can find information on virtually any aspect of the history of the United States of America. Yet only a very small part of the history of the Earth has been recorded by humans. Much of that history is, in fact, contained within naturally formed geologic materials. Sample sedimentary rock sequences and ice cores, and you can glean an incredibly detailed record of Earth's climate, environment, and life history. Earth's history book goes back over four billion years, but interpreting this history is not nearly as simple as reading the history of our nation. The Earth's book has been buried under hundreds to thousands of meters of rock and ice, and that burial has altered the signals that geologists use to reconstruct climate, environment, and life history. Imagine a history book that has been burned, soaked, and torn apart many times, and you might then understand the difficulty geologists have interpreting the history of the Earth.
Over the last century, geologists have made remarkable progress unraveling Earth's history, and what an amazing record it is! As we will learn in this module, we now know that there were times in the past when palm trees grew on the shores of Alaska and on the plains of Wyoming, and when reptiles colonized the islands of northern Canada. There were other times when ice sheets apparently encircled the entire globe and, more recently, when an armada of giant icebergs swept off Canada and Greenland and covered large swaths of the Atlantic Ocean. Crucially for today's climate, the geologic record tells us that Earth has not been as warm as it is today for 125,000 years!
We will learn in this class that climate and environmental change is threatening modern ecosystems, and that this threat will increase substantially in the future. But Earth history informs us that there were times in the past when 90% of species in the ocean were eradicated during mass extinction events, yet the remaining 10% were able to survive and, in fact, take advantage of the open niche space. Life in the past was extraordinarily resilient to some of the harshest environmental changes one can imagine. For example, the asteroid impact that wiped out the dinosaurs 65 million years ago caused large wildfires, followed by weeks or months of nearly complete darkness and highly acidic, possibly toxic oceans. Yet life hung on by a thread. If life could survive that level of harshness, then why are many ecosystems today in such distress? This is a central question of modern ecology, and something we will explore in great detail later in the course.
But let's start by discussing how Earth's great historical archive is recorded. As we will see shortly, the archive is actually built from a number of materials besides sediments and ice. Corals, stalactites formed in caves, and trees also carry signals of the environmental conditions under which they lived or formed. But here, we will focus on sediments and ice, which carry the majority of the historical record.
The sediment archive covers the entire record of the Earth. In fact, the archive holds the majority of the history, especially in what geologists call “deep time”, before the last million years or two. Sediments are deposited in a wide range of environments including lakes, rivers, deserts, and the ocean all the way from the beach to the deep sea. These sediments contain a range of different particles, depending on the source of their material. Sediments formed on land are largely made of what are known as clastic materials, minerals derived from the weathering of rocks. Some of these environments, such as lakes, also contain the fossilized remains of living organisms. Marine sediments, especially those that were laid down in the deep ocean, are largely composed of these fossils. Sediments are deposited by water, wind, and ice in a “layer cake” fashion, with the older layers underlying the newer layers. Each layer represents a unique period in Earth's history and preserves an environmental snapshot of that time interval. One of the main challenges with this record is determining how old individual layers are. To do this, geologists use a combination of the fossils in the layers, the orientation of magnetic grains, and materials that contain radioactive isotopes and their decay products, such as carbon-14 and the uranium-lead system. Through a great deal of intensive and laborious work, geologists have provided the age control that enables the interpretation of a truly remarkable record of Earth's climate, environment, and life history.
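The age-control idea described above can be sketched in code. The snippet below is a minimal, hypothetical depth-age model: given a few layers whose ages are known from dating (say, carbon-14 dates or a dated ash layer), the ages of intermediate depths are estimated by linear interpolation, assuming a steady sedimentation rate between dated tie points. The tie-point values are made up for illustration.

```python
# Sketch of a simple depth-age model for a sediment core (illustrative only).
# Given a few depths whose ages are known (tie points), ages at intermediate
# depths are estimated by linear interpolation, assuming a constant
# sedimentation rate between each pair of tie points.

def interpolate_age(depth_m, tie_points):
    """Estimate the age (years) at a given depth from dated tie points.

    tie_points: list of (depth_m, age_years) pairs.
    """
    pts = sorted(tie_points)
    if not pts[0][0] <= depth_m <= pts[-1][0]:
        raise ValueError("depth lies outside the dated interval")
    for (d0, a0), (d1, a1) in zip(pts, pts[1:]):
        if d0 <= depth_m <= d1:
            frac = (depth_m - d0) / (d1 - d0)
            return a0 + frac * (a1 - a0)

# Hypothetical tie points: carbon-14 dates at 1 m and 5 m, an ash layer at 12 m.
ties = [(1.0, 2_000), (5.0, 10_000), (12.0, 40_000)]
print(interpolate_age(3.0, ties))  # 6000.0 years, midway between the dates
```

Real age models are considerably more involved (sedimentation rates change, layers can be eroded or disturbed), but the layer-cake principle is the same: deeper is older, and dated horizons anchor the ages in between.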
On top of ice sheets in places such as Greenland and Antarctica, but also on glaciers in mountainous regions, snowfall accumulates layer upon layer in much the same way as marine sediment. As this snow is buried by later layers, it is gradually compacted to form ice. Ice accumulates extremely consistently, and annual layering is usually very apparent, so obtaining age control in ice is much simpler than dating sediments. In addition, ice can be dated using carbon-14, as well as very thin layers of volcanic ash that derive from eruptions.
So, now, down to the brass tacks. How is information on climate obtained? This is yet another really detailed and impressive story.
After completing this module, students should be able to answer the following questions (hint: each of these topics is the subject of one quiz question!):
Below is an overview of your assignments for this module. The list is intended to prepare you for the module and help you to plan your time.
Google the words “hockey stick,” and in addition to advertisements for sporting equipment, you will find links to one of the most chronicled and scrutinized datasets in modern science, the climate record of the last millennium. The curve was originally documented in a paper authored by Penn State Meteorology and Geoscience Professor Michael Mann and his colleagues. As shown in the figure below, the curve shows fairly constant temperatures, or even a modest cooling, from 1000 to about 1900 AD. Around 1900, a sharp warming trend began: the blade of the hockey stick. This warming has continued almost unabated to the present day. It so happens that the blade of the stick corresponds to the later stages of the industrial revolution, a time when the release of greenhouse gases from fossil fuels to the atmosphere spiked. The correlation between this warming and the increase in levels of greenhouse gases, which are known to trap heat in the Earth's atmosphere, is hard to argue with.
Perhaps the most scrutinized aspect of the hockey stick graph, and where the scientific controversy has focused, is that only part of the data set derives from direct temperature measurements, which have been made since about 1900. The remainder of the curve derives from historical records and "proxy" measurements that are made indirectly. "Proxies" are substitute measurements used to determine conditions at times before humans could measure climate directly. Such proxies serve as the basis for our understanding of past climate. In this module, we will learn how proxy measurements are made.
Fortunately, the history of the Earth, including the evolution of its physical environment and its life forms, is beautifully preserved by a variety of Earth processes. For example, as we discussed briefly on the last page, sediments deposited in the ocean basins accumulate in a layer-cake fashion and entrap the fossil remains of once-living organisms. In addition, while these organisms, including species of clams and corals, were living, they recorded Earth's climate history in the chemistry of layer upon layer of their shells. Images of a layered clamshell and coral skeleton are shown below.
There are three different sources of sediment to the deep-sea sedimentary record. The first is terrigenous material that is delivered via rivers into the oceans. The second is dust that is blown in from the continents by winds. And the third is biogenic material, formed by organisms that lived in the surface ocean and, after death, rained down through the water column to the seafloor. Regardless of source, these materials are buried in the deep-sea sedimentary record. The nature of the sedimentation process is such that the youngest material lies close to the sea floor, and the deeper the sediment, the older it is. Thus, the deep-sea sedimentary record provides a beautiful inventory of ancient climate from younger to older.
As we will detail below, chemical proxies allow the climate to be reconstructed from shell materials many hundreds of millions of years old. In fact, some of the best records of Earth's climate are obtained from mile upon mile... sorry, kilometer upon kilometer... of cores removed from the ocean floor. The current ocean basins are less than 180 million years old, meaning that only the last 180 million years of Earth's history are preserved in the current ocean. Fortunately, large-scale tectonic processes have elevated vast piles of sedimentary rocks onto the land, where they are preserved in mountains and elsewhere. These materials can be sampled in road cuts and cores, and climate information gleaned from them, taking our understanding of climate history much further back.
The major ice sheets have accumulated layer by layer over time. These layers record temperature, precipitation, and wind patterns at the time the ice formed. Tiny gas bubbles trapped within the ice preserve the composition of the atmosphere including levels of CO2 and methane (CH4). Kilometer upon kilometer of cored ice and the bubbles within it preserve an incredible record of Earth's climate variations for hundreds of thousands of years. See the images shown below of the GISP2 Ice Core and a sample of atmospheric gas bubbles in ice.
Ice cores provide valuable inventories of ancient climate. This slide shows various levels of the ice core, illustrating how the physical properties of ice change from the surface down to the bottom. Ice forms from snow precipitated on top of the glacier. The uppermost panel shows the properties of snow: very loosely compacted, showing a few of the annual layers that we'll describe later on, and very small trapped bubbles of air. In the middle panel, from deeper within the glacier, approximately 1838 meters down, the ice is more compacted and shows very strong annual banding, with darker layers indicating slow accumulation rates of ice during the warmer months and lighter layers showing the rapid accumulation of ice during the winter months. In this layer, you can see nicely trapped bubbles of gas. Close to the bottom of the glacier, where erosion of the underlying bedrock is common, the glacier has a dirty brown color because of the eroded minerals and rock particles in the ice. This panel shows the glacier close to its base, at about 3000 meters below the surface.
The accumulating ice also preserves dust, smoke, pollen, and ash from volcanic eruptions. The texture of summer snow and winter snow is very different, and the resulting layers provide a very accurate means to determine the age of ice cores, as does C-14 dating of the CO2 trapped in ice bubbles. In this way, the composition of the atmosphere can be determined hundreds of thousands of years back into the past. Imagine this: the air that your great-great-great-great-grandparents breathed has been collected in labs around the world!
Sections of ocean sediments have been sampled extensively by a suite of coring operations dating back to the 1940s. These operations include a variety of different drill ships as well as rigs like the ones that are used in oil exploration. In all cases, the retrieval of continuous cores from the seafloor requires a drill string, a continuous line of pipe that is assembled to extend from the ship or platform to the seafloor.
Once the drill string reaches the seafloor, a metal core barrel is lowered through the pipe to recover sediment. Typically, each core that is extracted is between 2 and 10 m long. At the head of the core barrel is the bottom hole assembly, a device that is positioned to cut the core. A variety of different techniques are used to cut sediment. Where the sediment is young and soft, as occurs in younger layers close to the seafloor, cores can be cut using a piston-coring device (see figure below). The piston is situated inside the core barrel, and it is triggered when the bottom hole assembly is close to the ocean bottom. When the assembly hits the bottom, the piston rapidly moves up, so that mud fills the empty pipe as it sinks. At the end of the bottom hole assembly is a drill bit that actually cuts the core. In the case of the piston core, the bit is a sharp, knife-like device that helps the rapidly advancing core barrel cut readily through the soupy sediment. If the formation is harder, as occurs deeper in the sediment column where burial has led to compaction and cementation, cores can only be retrieved using a hard bit that cuts the rock by rotating at high speeds, assisted by water and, in some cases, drilling mud (this is known as rotary coring). Compared to the piston core, which advances in seconds or less, rotary coring can take a very long time, often up to an hour for each meter of core; moreover, some of the sediment can be lost during coring, as pressure from water or mud must be applied to cut through the rock. Once the coring mechanism is fully extended, a core catcher slips into place underneath the coring device, and a wire line is used to retrieve the core to the ship or drilling platform.
Probably the best-known and longest-running drilling operation, the Integrated Ocean Drilling Program (IODP) has been in operation for more than 40 years. The program began as the Deep Sea Drilling Project in 1969 and is a collaborative scientific operation between a number of different countries. The ship chartered by the program is the JOIDES Resolution, a highly sophisticated drilling platform that has the ability to take cores in locations where the ocean is 6000 meters deep. The ship is about 150 meters long, with a drilling derrick that is about 50 meters high. The deepest hole drilled by IODP exceeds 2000 meters. The ship can operate in stormy seas with large waves and strong currents, as it is held in position by powerful thrusters, and the drill pipe is stabilized by a device called a heave compensator that keeps the drill bit on target even as the ship rises and falls with the waves. The Resolution has berths for about 120 engineers, scientists, drilling “roughnecks,” and caterers, and typically stays out at sea for six- to eight-week expeditions.
As we will see below, valuable information about Earth’s climate has been obtained by drilling in the ocean basins. However, rocks that were deposited in continental environments and marine sediments that have been uplifted onto the continents via plate tectonics also provide important information about past climates. These rocks are much more readily accessible and can be sampled in road cuts, stream beds, and many other places. In addition, sedimentary rocks on land can be sampled via coring with much less expense than their oceanic counterparts.
Samples of land materials are often taken by digging a trench to obtain fresher material beneath the weathered surface zone. If the material is indurated, samples of core are removed using a saw, and samples of outcrop are removed using a hammer. Soft material, usually in cores, can be sampled by inserting plastic tubes into the sediment. Once the samples are taken, the material is ready for analysis.
Over the long term, cores of sediment are usually kept in refrigerators to keep them from drying out. Cores of ice are housed in warehouse-sized freezers that are cooled to -30°C. The longest ice core sampled is the three-kilometer-long Dome C core from Antarctica, which extends back some 750,000 years.
The following video provides an excellent overview of how cores are collected by the Integrated Ocean Drilling Program. Click the play button in the center of the video to watch it.
Text on screen: Exploration, Discovery, Understanding. Integrated Ocean Drilling Program. Understanding Earth History via Scientific Ocean Drilling. Part One: Collecting and Processing Sediment Cores.
Hi, my name is Tim Brock. I'm an assistant lab officer on the JOIDES Resolution. My job here is to supervise a group of marine lab specialists and to ensure that the core that we've brought up is processed in a timely and accurate manner. The JOIDES Resolution, or the JR, as we call her, is 470 feet long, 70 feet wide, and can drill in 29,000 feet, or 8,800 meters, of water. The JOIDES Resolution and the research it supports make up the Integrated Ocean Drilling Program. It all began with the Mohole project in the 1950s, which showed deep-ocean drilling to be a viable means of obtaining geological samples. Ocean drilling has the advantage that the samples are undisturbed by atmospheric and surface conditions, and that provides a better geological record. The sediment and the hard rock that we harvest are in the form of cores, and locked within them is no less than the history of our planet. Each expedition has been carefully chosen by an international group of scientists for its scientific value and its location. They are currently exploring the Bering Sea and the unique formations which lie underneath its waters. Harvesting the core is an immensely complex process. I'm standing on the core receiving platform, or the catwalk, as we call it, and I'm here to explain one small part of the process: how the core gets from the rig floor to the core receiving platform, to the core laboratories. Voice over a loudspeaker: On deck, grande. Those words announce the arrival of the core barrel from its long journey from the seafloor. Contained inside are answers to the questions which the scientists seek. The cutting shoe is removed from the end of the core barrel. It contains a core catcher which secures the sediment-filled core liner inside. The core catcher is taken to a bench to be disassembled. Next, the liner is removed. The 10 meters of core is carried to the core receiving platform.
The technicians must work quickly, as another coring tool is already on its way down to the seafloor. There is only a limited amount of time. The core liner must be cleaned, cut, curated, engraved, and labeled before the next core hits the rig floor. The core is measured, to be cut into 150-centimeter sections. A specialized cutting tool is used to cut the core liner. Now, the core is more easily handled. Here, a microbiologist cuts the core with a sterilized spatula and takes small plugs of sediment. This sediment will be tested for the presence of microbes living at extreme environmental limits. A sample of sediment is given to a paleontologist. Microfossils found within are used to determine the age of the sediment. The remaining sediment is carefully repackaged into a section of core liner. The core liner section is sealed with colored end caps, indicating the orientation of the core. Acetone is applied to the end cap and seals the cap to the liner by chemically melting the plastic. At last, the sections are taken into the core lab. Each section has its unique core number and section number engraved into the core liner, and a barcode label attached. So, there you have it. The ship never sleeps. We drill 24 hours a day, seven days a week. This happens hundreds of times over a normal two-month expedition. Thanks for your time. I've got to get to work.
Since we cannot travel back in time to measure temperatures and other environmental conditions, we must rely on proxies for these conditions locked up in ancient geological materials.
The most widely applied proxies in studying past climate change are the isotopes of the element oxygen. Isotopes are forms of an element whose atoms have different numbers of neutrons (neutrally charged particles) but the same number of protons (positive charges) and electrons (negative charges). As you might remember from your chemistry classes, protons and neutrons have roughly equivalent masses, whereas electrons have negligible mass. Because different isotopes of the same element have different masses, they behave slightly differently in nature.
Oxygen has three different isotopes: oxygen-16, oxygen-17, and oxygen-18. These isotopes are all stable (meaning they do not decay radioactively). O-16 is by far the most common isotope in nature, accounting for more than 99.8% of all oxygen atoms, and O-17 is exceedingly rare, but O-18 is abundant enough in nature to be measured. The masses of O-16 and O-18 are different enough that these isotopes are effectively separated by natural processes. This separation process is known as fractionation. Without going into too much detail, O-16 and O-18 are fractionated by the process of evaporation, as well as when minerals, including the shells of animals and plants, are precipitated from water. The main driver of the evaporation effect in most geological intervals is the amount of water that has been removed from the ocean and sequestered in ice (see video clips below). Evaporation selectively removes the lighter isotope, O-16, from water, leaving higher concentrations of the heavier isotope, O-18. Thus, shells and other materials formed in the ocean tend to contain more O-18 during colder, glacial intervals than during warmer intervals. However, because a portion of the evaporated water ends up falling as snow that is then converted to ice, the reverse holds for ice sheets, such as those in Greenland and Antarctica: ice formed in glacial intervals has more O-16 than ice formed in warmer times. Superimposed on the evaporation effect is a temperature effect. Shells that grow in warmer water hold more O-16 than shells that grow in colder water, as explained in the following clips:
A range of materials from the geological record demonstrates significant variations in their levels of O-16 and O-18 as a result of changes in climate. These changes are a combination of temperature and evaporation effects. The materials include shells of clams, corals, and plankton called foraminifera that inhabit the surface of the ocean, as well as cores taken from glacial ice that has formed at high latitudes (a term that describes the cooler regions closer to the poles) and high elevations, and even stalactites formed in caves. Later in this module, we will study the isotope records of several intervals of rapid climate change in the geological record.
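To make the isotope measurements concrete, here is a small sketch of how oxygen-isotope values are expressed and converted to temperature. Isotope ratios are conventionally reported in per-mil "delta" notation relative to a standard, and quadratic calcite-water paleotemperature equations relate the shell-water difference to growth temperature. The exact coefficients vary among published calibrations; the ones used below are illustrative only.

```python
# Sketch of the oxygen-isotope proxy calculations (illustrative; the exact
# calibration coefficients vary between published studies).

def delta_18O(ratio_sample, ratio_standard):
    """Express an 18O/16O ratio in the standard per-mil delta notation."""
    return (ratio_sample / ratio_standard - 1.0) * 1000.0

def carbonate_temperature(delta_calcite, delta_water):
    """Approximate growth temperature (deg C) of a calcite shell.

    Uses the classic quadratic form of the calcite-water paleotemperature
    equation; the coefficients here are illustrative placeholders meant
    only to show the shape of the relationship.
    """
    d = delta_calcite - delta_water
    return 16.5 - 4.3 * d + 0.14 * d ** 2

# A shell enriched in O-18 relative to the water implies colder growth:
print(carbonate_temperature(1.0, 0.0))  # about 12.3 deg C, cooler than 16.5
```

Note the two effects discussed above are entangled: the measured shell value reflects both the water temperature and the isotopic composition of the seawater itself, which shifts as ice sheets grow and shrink.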
As we have seen, there are a number of assumptions that need to be made to convert oxygen isotope values to temperature. At times when there were major ice sheets, the proxy can be difficult to apply, and in fact most applications of oxygen isotopes during cold intervals focus on changes in ice volume, not warming or cooling. Thus, it is fortunate that there are other proxies that can be applied to determine temperature. Here, we consider two of these: the ratio of magnesium to calcium in fossil CaCO3 shells, and the TEX-86 ratio in organic carbon.
Organisms that construct shells of the two CaCO3 polymorphs, aragonite and calcite, include magnesium in very small amounts in their shells. As it turns out, that amount is heavily dependent on temperature as a result of somewhat complex kinetic processes: the higher the temperature, the more Mg is included in the shell. Unfortunately, there are a few complications that need to be considered. First, like oxygen isotopes, the Mg/Ca proxy requires that all of the calcite or aragonite is original and has not been altered during burial of the shell. Second, the ratios appear to differ from one species to another, so there have to be careful calibrations for individual species. These calibrations focus on developing a relationship between the Mg/Ca of the shell and the temperature of the water in which it grew. This can obviously only be done for living species; for extinct species, assumptions need to be made, and they result in some uncertainty. Workers are careful to restrict their Mg/Ca analysis to individual species. Third, the Mg/Ca ratio of seawater has changed slowly over time, and this can impact the absolute temperatures when the focus of study extends over many millions of years. The proxy is most useful when the study interval is short, the fossil shells are unaltered, and the species are still living. Since Mg/Ca is not impacted by the volume of ice as oxygen isotopes are, the proxy is often used along with the latter system to determine the volume of glaciers in cold geologic periods.
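The species-specific calibrations mentioned above typically take an exponential form, which can then be inverted to estimate temperature from a measured shell. The sketch below shows that inversion; the coefficients are placeholders of roughly the magnitude reported for planktonic foraminifera, not a specific published calibration.

```python
import math

# Sketch of the exponential Mg/Ca-temperature relationship (illustrative).
# Calibrations take the form Mg/Ca = B * exp(A * T), with A and B fitted
# separately for each species; the values below are assumed placeholders.

A = 0.09   # per deg C (species-specific; assumed for illustration)
B = 0.38   # mmol/mol at 0 deg C (species-specific; assumed for illustration)

def mgca_temperature(mg_ca_mmol_mol):
    """Invert the calibration to estimate calcification temperature (deg C)."""
    return math.log(mg_ca_mmol_mol / B) / A

# A warmer shell carries a higher Mg/Ca ratio:
print(mgca_temperature(3.0))  # roughly 23 deg C with these coefficients
```

Because the relationship is exponential, a small analytical error in Mg/Ca translates into a modest, roughly constant error in temperature, which is one reason the proxy is so widely used.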
TEX-86 is a proxy based on complex organic (i.e., carbon-based) compounds. The proxy is applied to compounds made by archaea, single-celled prokaryotes (organisms that do not have a differentiated nucleus). The archaea in question live in the oceans and are called crenarchaeota, and the TEX-86 proxy is based on changes in the composition of the lipid membranes of these organisms. The detailed structure of the membrane changes with temperature. The proxy is called TEX-86 because it is based on a tetraether index of compounds containing 86 carbon atoms (you do not need to remember this!). TEX-86 has been calibrated in the modern oceans, meaning the indices of samples have been compared to the temperatures of the water in which the organisms grew. The main uncertainty of the system revolves around the fact that the organisms of interest do not live right at the ocean surface, so the recorded temperatures are theoretically lower than surface values. In addition, it appears that the index is impacted by the growth rate of the organism. Application of TEX-86 is restricted to marine sediments and is most useful where there are no shells to measure for oxygen isotopes or Mg/Ca. Regardless of the uncertainties, the TEX-86 proxy is extremely useful as a check on results from these other two systems. Where two different proxies agree on the temperature of an interval, the results are much more certain.
Oxygen is not the only abundant element with isotopes that can be used as environmental proxies. The isotopes of the element carbon are also used as proxies in environmental reconstruction. Carbon has a number of isotopes, including C-14, which is radioactive, and two stable (i.e., non-radioactive) isotopes, C-12 and C-13. C-14 is widely applied in dating recently formed natural materials that contain significant amounts of carbon, such as shells and charcoal, and even materials that contain trace amounts of carbon, such as pots and cloth, since its abundance can be accurately determined using an accelerator, and its rate of decay is rapid.
However, because of its rapid decay, C-14 disappears in a short time (about 100,000 years). The isotopes C-12 and C-13 are not radioactive, are common in natural materials, and are fractionated by environmental processes in a way that allows them to be applied as proxies. C-12 has 6 protons and 6 neutrons and is the most abundant of all of the isotopes of carbon. C-13 has 6 protons and 7 neutrons and is much less abundant than C-12, but it can still be measured by mass spectrometry (the technique used to measure the abundance of stable isotopes). C-12 and C-13 are strongly fractionated during photosynthesis, when plants convert CO2 and sunlight into food; it requires less energy for a plant to incorporate an atom of the lighter isotope, C-12, than an atom of C-13. As a result, plant material formed via photosynthesis preferentially incorporates C-12 over C-13. As we will see, carbon isotopes are key indicators of the sources of greenhouse gas in the atmosphere, both today and in the past. Carbon isotopes can also be used as proxies for nutrient cycling and deep-water circulation, as well as the source of plant materials.
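The C-14 dating described above follows directly from the law of radioactive decay. The sketch below computes an age from the fraction of C-14 remaining in a sample; real radiocarbon ages are additionally calibrated against tree-ring and other records, which is omitted here.

```python
import math

# Sketch of a carbon-14 age calculation (illustrative). Given the fraction
# of the original C-14 still present in a sample, the decay law
# N/N0 = (1/2)^(t / half-life) is inverted to solve for the age t.

HALF_LIFE_C14 = 5730.0  # years (the commonly quoted half-life of C-14)

def c14_age(fraction_remaining):
    """Age in years from the measured C-14 abundance relative to modern."""
    return -HALF_LIFE_C14 / math.log(2) * math.log(fraction_remaining)

# After one half-life, half of the original C-14 remains:
print(round(c14_age(0.5)))  # 5730
# By the time only ~0.5% remains, the age is already beyond 40,000 years,
# which is why the method reaches back only tens of thousands of years.
print(round(c14_age(0.005)))
```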
Tree rings have been widely applied in climate reconstruction. Rings are produced as a result of seasonal variations in the growth rate of the wood. Wood produced during rapid growth, for example in spring in temperate regions, tends to be less dense than wood produced during slower growth phases in the late summer and early autumn. Where seasonal growth is highly variable, as in temperate regions, rings are strongly expressed in the wood.
As shown in the image above, tree rings are especially powerful paleoclimate indicators because the number of rings can be counted to determine the age of the tree, or the age of the ring that is providing the paleoclimate information. The width of tree rings can be interpreted in terms of temperature and precipitation variations; however, there are a number of factors that complicate this interpretation.
Fossils represent the remains of life that once thrived in the oceans or on land. They range in size from dinosaurs to shells of clams and oysters, all the way down to microscopic remains of algae, fractions of a millimeter in size. Although many groups of fossils can yield information on climate and environment, in this module we focus on the fossil remains of plants as well as those of the microplankton and microbenthos. These microfossils include a number of key groups that live in the ocean and in freshwater bodies such as lakes.
The diatoms are a group of plankton that make their shells out of opal (hydrated, amorphous SiO2). These organisms are autotrophic, meaning they fix CO2 dissolved in seawater or lake water via photosynthesis. The delicate diatom shell, or frustule, consists of two valves that fit together like a pillbox. The diatom cell lives inside the frustule, and its plastids, or chloroplasts, use light as a source of energy in photosynthesis. Diatoms thrive where nutrient and silica levels are high: in lakes, the coastal ocean, and the Southern Ocean, the ocean that circles the globe just north of Antarctica. Diatoms reproduce asexually (vegetatively), and, where conditions are right and nutrient levels elevated, they can rapidly reach concentrations of many millions of cells per liter of seawater. In certain cases, as we will see in Module 7, these blooms are called red tides, and in some of them, diatoms produce toxins.
The dinoflagellates are a group of plankton that is mostly autotrophic but can also be heterotrophic. The dinoflagellates thrive in the coastal ocean and have a complex life cycle in which a floating, motile stage alternates with a dormant resting stage. The dinoflagellates make their tests out of cellulose, which is rarely preserved; however, the cyst stage of dinoflagellates is composed of sporopollenin, a polymer that preserves readily during burial. The dinoflagellates are the group primarily responsible for red tides; we will discuss them in detail in Module 7.
The coccolithophores, like the diatoms and dinoflagellates, are a group of marine autotrophic plankton. The coccolithophores, however, make their delicate shell out of the mineral calcite, or CaCO3, and have a more ocean-wide distribution than the diatoms. Today, the coccolithophores are adapted to live where the diatoms and dinoflagellates cannot, in parts of the ocean where nutrient levels are lower. Like the other groups, the coccolithophores are able to produce extremely large numbers of cells per liter in so-called blooms. Unlike the other two groups, the coccolithophores are threatened by ocean acidification, as we will see in Module 7.
The foraminifera are zooplankton, meaning they consume tiny phytoplankton to get their energy. The foraminifera, or forams as they are known, make their shell out of calcite or CaCO3. The delicate foram shells have a variety of shapes and are adapted to the life mode of the individual species. The forams have two very different modes of life. The planktonic forams float passively in the water column, whereas the benthic forams live on the seabed, either resting on the bottom or burrowing into the soft sediment. The foraminifera have been critical in the reconstruction of ancient ocean temperatures via stable isotopes and trace element proxies, as we will see below.
The following video provides an overview of the foraminifera.
The radiolaria are zooplankton like the foraminifera. The radiolaria are one of the longest-lived plankton groups, extending back to the early part of the Cambrian period, over 500 million years before present. The group makes a spaceship-like test out of opal (SiO2) and, like the diatoms, is restricted to places in the ocean where nutrient levels are elevated. The radiolaria thrive in tropical regions where upwelling currents bring nutrients to the surface ocean.
Plant fossils have been widely used in paleoclimate studies. These studies are based firmly on the distribution of modern plant species and on physical characteristics of those plants that are related to climate. The distribution of tropical plant species such as palms, cycads, or certain groups of ferns can be used as evidence for warmer climates in the geologic record, for example, during the Cretaceous, when these groups are found in places such as Alaska that have much colder climates today. Plants also provide more quantitative proxies. For example, the shape of the margin and the size of leaves are closely related to temperature and precipitation, respectively. Warmer climates tend to produce leaves with smoother margins, whereas colder climates tend to produce leaves that are more jagged in shape. Wetter climates tend to produce larger leaves than drier climates with the same temperatures. Using the shapes and sizes of leaves from modern locations spanning a range of temperatures and rainfall, researchers have developed equations relating leaf margin and size to temperature and rainfall, and these allow both climatic parameters to be estimated from ancient leaves.
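Leaf-margin analysis boils down to a simple linear calibration. The short Python sketch below uses coefficients patterned after published North American leaf-margin calibrations; treat the exact numbers as illustrative assumptions rather than the definitive equation.

```python
def mat_from_leaf_margin(pct_entire):
    """Estimate mean annual temperature (degrees C) from the percentage of
    woody species at a site with entire (smooth) leaf margins.
    Coefficients are illustrative, patterned after published calibrations."""
    return 0.306 * pct_entire + 1.141

# A fossil flora where 75% of species have smooth margins implies a warm,
# roughly subtropical mean annual temperature.
print(round(mat_from_leaf_margin(75.0), 1))
```

The key point is the direction of the relationship: the higher the fraction of smooth-margined species, the warmer the estimated climate.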
The density of stomata, the small pores typically found on the underside of leaves that provide a pathway for CO2 to enter the leaf, is inversely related to the partial pressure of CO2: plants grown under higher CO2 tend to produce fewer stomata. Thus, stomatal density has been used as a proxy for ancient CO2 levels.
Certain types of sedimentary rocks are, by their very nature, indicative of particular climatic conditions. For example, in the arid subtropical regions, evaporation rates are so high that waters near the margins of the oceans readily evaporate. Once salinities reach certain levels, a sequence of minerals begins to precipitate directly from seawater. These minerals include gypsum (CaSO4·2H2O), anhydrite (CaSO4), dolomite (CaMg(CO3)2), halite (NaCl), and sylvite (KCl). This group of minerals is known as evaporites, and when evaporites are found in the geologic record, they are indicative of arid climates. Today the suite of evaporite minerals is forming in the Persian Gulf. In the past, very thick sequences of evaporites were deposited in the Mediterranean during the late Miocene (6 million years ago), and in Texas and New Mexico during the Permian (250-300 million years ago).
Just as evaporites indicate hot and dry climates, occurrences of coal in the geologic record suggest hot and wet climates. Today, in tropical and subtropical regions with ample rainfall, trees and other plants grow quickly. For example, tropical rainforests cover large areas of the Amazon basin, and mangrove forests grow along subtropical coasts in places like the Mississippi Delta and the coast of Florida. When this vegetation dies, it is rapidly buried in sediment in rivers, deltas, and beaches. Once encased beneath hundreds of meters of overlying sediment, and heated and pressurized over millions of years, the vegetation is transformed into coal. Coal, then, is a sign of hot and wet conditions in the geological record. Please take a few moments to check out the photographic evidence below.
In the next stage of the module, we will consider a number of climate events that took place in the geological record. The events we chose to present have one thing in common: they began very abruptly, and thus they provide some lessons about modern climate change. The events include both warming and cooling intervals and span a large part of the geologic record, from over 2 billion years ago to 8.2 thousand years ago.
Some of the most abrupt and dramatic climate changes occurred very recently in Earth's past, a geologic heartbeat ago if we consider the complete 4.6 billion years of the planet's history. Materials including sediments deposited in the deep sea, ice formed in massive glaciers, stalactites formed in caves, the remains of woolly mammoths and other large mammals, and the spores and pollen of plants provide evidence for very large and frequent oscillations in Earth's climate that began about 2.5 million years ago. These oscillations involve the repeated advance and retreat of glaciers in the Northern Hemisphere. At their peak, ice covered the northern parts of North America, Europe, and Asia, and the climate fluctuations also caused major changes in vegetation and animal habitats, as well as significant changes in ocean circulation.
Glaciers deposit very diagnostic landforms and sediments that are often full of large boulders eroded from wide swaths of land over which the ice has traveled. More than a century ago, geologists determined using such evidence that at the coldest time of the Pleistocene, glaciers covered Edinburgh, Scotland; Moscow, Russia; and Detroit and Chicago in the US. In fact, from the glacial deposits alone, glaciologists had inferred several major advances and retreats of the two major ice sheets, the Laurentide in North America and the Fennoscandian in Europe and Asia.
At the height of the last major glaciation, known as the Last Glacial Maximum (LGM), 18,000 years before the present, ice sheets covered Chicago, Boston, Detroit, and Cleveland (see maps below).
Our understanding of the climate of the Pleistocene surged in the 1950s when coring of sediment began in the deep sea and when the potential of oxygen isotopes in reconstructing ancient climate began to be realized. Cores showed dramatic alternations or cycles in the type of sediment with sharp color changes from red or pink to white or gray. The cycles were found to correspond to changes in the amount of the mineral CaCO3 that is derived from the shells of deep-sea organisms. The alternations were interpreted as major switches in the circulation of the deep ocean with corresponding changes in the ventilation and corrosiveness of the waters in which the sediments were deposited. Planktonic foraminifera, in different phases of the cycles, were found to have different oxygen isotope ratios that were interpreted as fluctuating sea surface temperature and glacial ice volume.
From studying cores of ice and deep-sea sediment, we now know that there have been more than 25 different advances and retreats over the last 2.5 million years. In fact, as the sediment and ice cores were analyzed, it was found that a number of the proxies fluctuated in a regular and periodic fashion. It had long been known from theoretical astronomy that the Earth's orbit around the sun varies as a function of regular fluctuations in the shape of the orbit (called Eccentricity), the tilt of the axis of rotation (called Tilt or Obliquity), and the wobble of that axis (called Precession) (please see video below). Since these fluctuations control the amount of solar insolation received at the Earth's surface, they were known to have a strong climate effect. The changes are cyclic with regular periods (the time from beginning to end of one cycle). From astronomical theory, the Eccentricity cycle is known to have periods of 100,000 and 400,000 years (two different cycles), Tilt/Obliquity a period of 40,000 years, and Precession a period of about 20,000 years. The Pleistocene proxy records were found to contain some of the same periodicities as these orbital fluctuations, and this is strong evidence that changes in the amount of solar insolation were the ultimate control on the Pleistocene ice ages. The figure below shows an oxygen isotope record with prominent 41,000- and 100,000-year cycles.
Credit: Five Myr Climate Change [24] from Wikimedia, licensed under CC BY-NC-ND 2.0 [25]
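To get a feel for how the three orbital cycles combine into one signal, here is a minimal Python sketch that superimposes unit-amplitude sine waves with the periods given in the text. Real insolation curves weight the three cycles very differently and depend on latitude, so this is purely illustrative.

```python
import math

# One sine wave per orbital cycle, using the periods from the text.
# Equal amplitudes are an assumption for illustration only.
PERIODS_YR = {"eccentricity": 100_000, "obliquity": 40_000, "precession": 20_000}

def orbital_signal(t_yr):
    """Superposition of the three orbital cycles at time t (years)."""
    return sum(math.sin(2 * math.pi * t_yr / p) for p in PERIODS_YR.values())

# Sample the combined signal every 10,000 years over 200,000 years.
samples = [round(orbital_signal(t), 2) for t in range(0, 200_000, 10_000)]
print(samples)
```

Plotting such a superposition shows the "beating" pattern that spectral analysis of proxy records can decompose back into the individual orbital periods.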
The video below provides an overview of how the Earth's orbit varies and how it affects climate.
The figure below shows temperature (derived from O-isotopes), atmospheric CO2 measured from gas bubbles, and dust concentrations in ice samples from the famous Vostok ice core from Antarctica. The climate fluctuations shown by these data are some of the most abrupt and regular in the geologic record. The data show a close relationship between temperature and atmospheric CO2 content that is not fully understood but probably is related to intensified ocean circulation during glacial intervals that led to vigorous upwelling in the Southern Ocean. The Southern Ocean is one of the most productive areas in the oceans and intensified upwelling and photosynthesis could have led to the increased removal of CO2 from the atmosphere. Intensified atmospheric circulation during the glacial periods is thought to have transported more dust over Antarctica causing the increase in dust concentrations.
As the study of the Pleistocene period has intensified, we now know that glacial-interglacial cycles also corresponded to:
You will notice one key thing from the temperature plot above. Temperatures were higher than today about 125,000 years ago -- this interval is known as isotope Stage 5e or, more informally, the last interglacial. Since this time was well before humans were producing large volumes of CO2, the warming was a result of natural processes related to Earth's orbital configuration and insolation. So this is a key interval, and one very interesting facet of it is that sea levels were higher than they are today, likely by between 2 and 5 meters. Since temperatures were about 2 degrees C higher during the last interglacial, this gives us some important constraints on how much sea level may rise in coming decades and centuries, depending on CO2 emissions.
The last glacial peak occurred 18,000 years ago, and since that time the planet has warmed (with a number of reversals, as we will see shortly). Since these fluctuations in the Earth's orbit continue, at some stage in the future Earth will begin to cool again. At the Last Glacial Maximum, temperatures were considerably cooler in the high latitudes, and sea level was over 120 meters lower.
These fluctuations have significant potential for informing us about future climate change. For example, the shape of the glacial climate cycles shows that the warming arm is rapid while the cooling arm is much slower. This asymmetry tells us about the mechanics of positive and negative feedbacks in Earth's climate.
At the other end of the temperature spectrum from the PETM are the Snowball Earth events that occurred in the Proterozoic Eon (2.5 billion to 541 million years ago), a time when only very primitive organisms inhabited the planet and oxygen levels were considerably lower than today. The aptly named Snowball Earth represents some of the most bizarre climate conditions ever experienced by the Earth, with strong evidence for ice sheets in equatorial locations. The evidence comes from sedimentary layers with a mixture of fine particles and large boulders that look just like the deposits of modern glaciers. These layers are found at locations known to have been near the poles, which is not surprising, but also at locations thought to have been near the equator, which is really unusual. This apparently global distribution of glacial layers is why the events are termed "Snowball Earth". The main snowball events occurred about 2,220 million years ago (2.22 billion), 710 million years ago, and 640 million years ago.
Underneath and at the front of modern glaciers lie thick sediments that are characterized by a wide mixture of different grain sizes, from clay all the way up to large, angular boulders. Very similar, jumbled rocks are found in the Snowball events. It is not a surprise that such deposits exist at high latitude locations where large ice sheets have existed at a number of times in the past. What is so unusual is that glaciers are thought to have covered the tropics. Continents have been moving throughout geologic time, so it's natural to ask how we know the locations of Snowball deposits back in the Proterozoic. The best evidence for this is the orientation of magnetic grains in the sediments.
The Earth has a magnetic field that is driven by motions of fluid iron in the outer core. This field acts like that of a simple bar magnet, with field lines traveling through the center of the Earth and then wrapping around it, pointing back toward the North Magnetic Pole, as shown in this diagram. If one is standing close to the equator, the field is fairly flat (horizontal), whereas close to the poles it is much more vertical. The angle the field makes with the horizontal is called the inclination. The Earth's magnetic field is captured in sedimentary and igneous rocks at the time of their deposition or formation. That means geologists can sample these rocks, take them to the lab, and measure the orientation of the field recorded when the rocks formed. If the recorded field is flat, the rocks formed close to the equator; if the inclination is closer to vertical, the rocks formed close to the poles. For the Snowball Earth hypothesis, it is very interesting that many of the rocks that contain evidence of glaciation were formed with flat inclinations, close to the equator, and this is one of the major pieces of evidence that glaciers covered equatorial regions during the Snowball Earth.
Credit: Tim Bralower © Penn State University is licensed under CC BY-NC-SA 4.0 [3]
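The inclination-to-latitude conversion follows from the geocentric axial dipole model, in which tan(I) = 2 tan(latitude). A short Python sketch of that conversion:

```python
import math

def paleolatitude_deg(inclination_deg):
    """Geocentric axial dipole relation: tan(I) = 2 * tan(latitude).
    A flat (near-zero) inclination implies formation near the equator;
    a steep inclination implies formation near the poles."""
    return math.degrees(math.atan(math.tan(math.radians(inclination_deg)) / 2))

# Shallow inclination -> low paleolatitude; steep inclination -> high paleolatitude.
print(round(paleolatitude_deg(10.0), 1))
print(round(paleolatitude_deg(80.0), 1))
```

Note that the paleolatitude is always shallower than the inclination itself, which is why even modestly inclined glacial deposits can place ice sheets well within the tropics.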
So how cold was the Snowball Earth? If, as it appears, the planet was covered by ice from pole to pole, most of the sun's radiation would have been reflected back to space, and temperatures must have been frigid. Models suggest that the global average temperature was about -50oC and that the temperature at the equator would have been similar to that at the poles today, about -20oC. Under these conditions, most parts of the planet would have been under roughly 1 km of ice. With so little ability to absorb and retain heat, it is hard to imagine how Earth recovered from a Snowball event, but surprisingly, the ends of the glacial events appear to have been extraordinarily abrupt.
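The plausibility of such frigid temperatures can be checked with a zero-dimensional energy balance in which absorbed sunlight, S(1-a)/4, equals emitted infrared radiation, sigma*T^4, with no greenhouse effect. The albedo values below are illustrative assumptions (ice and snow reflect most sunlight; the modern planet reflects about 30%).

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
SOLAR = 1361.0    # present-day solar constant, W m^-2 (somewhat lower in the Proterozoic)

def equilibrium_temp_c(albedo):
    """Zero-dimensional energy balance with no greenhouse effect:
    S*(1 - albedo)/4 = SIGMA * T^4, solved for T and returned in degrees C."""
    t_kelvin = (SOLAR * (1 - albedo) / (4 * SIGMA)) ** 0.25
    return t_kelvin - 273.15

print(round(equilibrium_temp_c(0.3), 1))   # roughly modern planetary albedo
print(round(equilibrium_temp_c(0.65), 1))  # assumed albedo of an ice-covered planet
```

Even this crude model shows how raising the albedo to ice-like values drops the no-greenhouse equilibrium temperature by tens of degrees, in line with the model estimates quoted above.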
An integral part of the Snowball story is that the glacial layers are overlain by layers called cap carbonates, shown in the image to the right. These layers contain no glacial material at all, being composed entirely of the minerals calcite (CaCO3) and dolomite (CaMg(CO3)2). These carbonates look as if they were deposited in relatively shallow water on platforms, in a setting similar to the modern-day Bahamas. The carbonates often have large ripples formed by strong underwater currents, as well as evidence of bacterial activity.
The juxtaposition of glacial deposits and cap carbonates has fascinated geologists and spurred a great deal of debate. Among the major puzzles are how the glaciers could have melted so fast and how such concentrated carbonate material could have been deposited. The abrupt melting of ice must have been driven by the build-up of CO2 in the atmosphere. Atmospheric CO2 is supplied by processes such as volcanism, which presumably continued during the Snowball. CO2 is removed by processes such as photosynthesis and the weathering of rocks; however, neither of these processes would have been active during a Snowball. Thus, it is thought that CO2 would have risen unchecked to the point where a super greenhouse effect was triggered, global temperatures rose rapidly, and the ice melted.
The origin of the highly concentrated carbonate is likely connected to the ability of weathering reactions to increase the saturation of minerals such as calcite and dolomite in the ocean. Perhaps more perplexing is how life, primitive as it was at the time, survived on a glaciated planet. Available fossils suggest that organisms at the time included cyanobacteria and red algae, which used light as a source of energy, as well as bacteria that used chemicals. Microbes are known to exist close to the surface of modern glaciers. Perhaps the ice was considerably thinner near the equator, or possibly there were small ice-free refuges. The Snowball remains a fascinating climatic puzzle.
The Paleocene-Eocene Thermal Maximum (PETM), 56 million years before present, is arguably the best ancient analog of modern climate change. The PETM involved more than 5oC of warming over 15-20 thousand years (actually slower than the rate of warming over the last 50 years), fueled by the input of more than 2,000 gigatons (a gigaton is a billion tons!) of carbon into the atmosphere. The PETM was associated with the largest deep-sea mass extinction event of the last 93 million years and a remarkable diversification of life in the surface ocean and on land. Because of its potential significance, geologists have swarmed to study the event, and it has been the topic of great interest, and more than a little controversy, for the last 25 years. For these reasons, we devote considerable attention to the event here.
The evidence for warming comes from a variety of sources, the most compelling of which is the oxygen isotopes of planktonic and benthic forams in deep-sea cores. These data suggest significant warming of 6-8oC in ocean surface waters across a range of latitudes, as well as in deep waters. We don't need to go into the details here, but this converts to a 4-5oC increase in average global temperature, which is quite significant. For comparison, global warming since the industrial revolution is about 1.2oC. Today, warming is a result of human activities, but only very primitive primates were around at the time of the PETM, and they did not drive cars (!), so what caused the warming back then?
Corresponding to the oxygen isotope shift is a large, negative, 4 to 5 per mil change in carbon isotopes that is used to define the geological extent of the event. The isotope excursion has been identified in sediments deposited in the ocean and in those laid down in terrestrial environments such as lakes and rivers. It serves as a golden spike because it can be correlated around the world and marks a precise time horizon; in fact, the excursion is now the formal definition of the boundary between the Paleocene and Eocene epochs. Moreover, the excursion in terrestrial sediments allows us to correlate the changes that occurred on land during the PETM with those that took place in the ocean. Below is what the PETM looks like in the Big Horn Basin in Wyoming.
The content of CO2 in the atmosphere increased 3-4 times during the PETM. Regardless of whether it comes from cars, factories, or non-human sources, CO2 is a greenhouse gas, and it causes the atmosphere to warm. As a consequence, surface ocean temperatures at the peak of the event were extremely warm, especially in the high latitudes. Off the coast of Antarctica, where the ocean today is close to freezing, temperatures reached about 20oC (68oF) at the peak of the PETM! Imagine jumping into the ocean off Antarctica today! Tropical ocean temperatures were really hot: a recent paper indicates that temperatures off the coast of West Africa were 36oC, which is 97oF! Now, I've been swimming in Miami in August and it feels like a bathtub at 88oF, but 97oF is virtually uninhabitable! The PETM was also associated with major changes in the properties of the deep ocean. Unlike today, when deep ocean waters are close to freezing, PETM deep waters were 10-15oC. This may not seem that warm, all things considered, but there is no doubt that it caused a fundamental change in the way ocean circulation worked. Sea level was quite a bit higher during the PETM, and the continents were in different positions, as shown on the map below.
Likely, these warm deep waters came from different surface ocean locations than deep waters do today (see Module 6 for more detail on the sources of modern deep waters), combined with the warming that took place in the surface ocean. Since warm waters hold less oxygen than cold waters, PETM deep waters in many locations were likely close to a condition known as hypoxia (we will learn more about this in Module 6). The word hypoxia does not sound too distressing, but imagine for a minute that you are a fish and you need oxygen for respiration; hypoxia would have been truly awful for such a creature! Finally, the input of so much CO2 into the ocean caused ocean waters to become more acidic, a condition known as ocean acidification (sorry to keep jumping ahead, but we will learn a lot more about this in Module 7!). Acidification of the deep ocean during the PETM is well accepted and is recorded by the complete dissolution of the CaCO3 shells that rained down on the seafloor. For creatures that require a shell for protection against predators and to shield their soft cellular parts from the harsh ocean, acidification would have been disastrous. By comparison, the shallow ocean experienced a much smaller drop in pH, and shelly creatures continued to thrive there.
Before you read this section, please make sure that you have read the section on the different fossil groups under Proxy Techniques: Fossils and Rocks.
Paleontologists have studied the response of many different groups of organisms in the PETM fossil record, from tiny plankton such as foraminifera in the oceans all the way up to small primates on land. The event also has a spectacular plant record, especially if you love tropical species. One of the prime areas on land is the Big Horn Basin in Wyoming, where geologists have scoured the rocks for primates and plant material, and where an exceptional record of the PETM exists, correlated precisely to the deep-sea record using the carbon isotope shift. The land record shows the introduction of small primates and tiny ancestors of horses right near the beginning of the PETM, along with a burst of tropical vegetation (refer to photos). It is almost impossible for the primates and horses to have evolved so quickly; what is much more likely is that warming during the event opened land bridges in places such as Alaska and Siberia, and that their appearance in North America reflects immigration from another continent such as Eurasia. The beautiful Big Horn plant record shows a change from moist subtropical vegetation before the event to dry subtropical vegetation during it. Imagine the landscape of northern Florida changing, in a geologic heartbeat, to that of dusty southern Texas.
Studying fossils in the ocean during the early part of the PETM is somewhat difficult. PETM cores from the deep sea are devoid of CaCO3 in the early part of the event as a result of acidification of the deep ocean. This acidification was so strong that shells that had already been buried on the bottom of the ocean also dissolved! The chief biological impact of the PETM was the mass extinction of deep-sea benthic forams: approximately 35-50% of all species in this group went extinct during the event. Almost certainly, acidification was the main culprit, but it could not have helped that the sea floor was often hypoxic and that, in some parts of the ocean, food supply was limited. Life on the bottom of the ocean during the PETM was not fun, that is for sure!
Interestingly, the benthic forams were almost unaffected by the environmental impacts at the Cretaceous-Tertiary (K-T) boundary, a time when the dinosaurs and many other groups went extinct. This contrast reflects the fact that the PETM had a far more severe impact on deep-water environments, whereas the K-T boundary had more impact on the surface ocean and on land.
Evidence for ocean acidification at the sea surface at the very outset of the PETM is more elusive, partially because the fossil remains of coccolithophores and planktonic foraminifera that would hold the evidence were dissolved as they rained down through the acidic deep ocean or lay on the seafloor. However, later in the event, the fossil record is spectacular. Coccolithophores display signs of deformation, which has been observed in modern species grown in the lab under elevated CO2. So, it could be that there was a minor amount of acidification at the ocean surface, and this is supported by a proxy for pH based on boron isotopes. But acidification was clearly not the driver of change among the animals and plants that lived at the sea surface.
Although extinction was focused in the deep sea, the PETM actually did result in abrupt changes to life in the surface ocean, one of the most dramatic of which was the occurrence of blooms of dinoflagellates in the coastal oceans.
These dinoflagellate blooms, which can be thought of as ancient red tides, are a sign of major environmental stress in the coastal zone, possibly a result of increased runoff of water from the land. This runoff delivered nutrients, fueling a process called eutrophication, which led to rapid algal blooms and hypoxia on the sea floor. Elsewhere in the oceans, the environmental changes during the PETM led to shifts in the distribution of plankton groups, with tropical species invading the high latitudes and high-latitude species dwindling in abundance. As we said earlier, the tropics could have been like a warm bathtub (close to 100oF), so you can imagine all of these tiny coccolithophores and foraminifera fleeing these conditions in a mass exodus! Clearly, temperature was the big factor for surface plankton away from the coastal zone.
Another very cool aspect of the change in plankton is shown by the tropical and subtropical planktonic foraminifera, which display a unique set of morphologies during the PETM. It is unclear whether these morphologies represent new species or just variations of existing species, but it is pretty amazing that such variation arose in a geologic heartbeat and then disappeared at the end of the event. Quite possibly, these new forms represent adaptation to a deeper habitat: the species may have been forced to abandon the very surface of the ocean because of the uninhabitable conditions there.
One final thing about the plankton during the PETM: at the end of the event, the distributions and abundances of the different taxa reverted to close to where they were before. The structure of communities changed a little, but for the surface species, it was almost as if the event never happened. Life there was very resilient; on the sea floor, however, it was a different story, and life was altered forever.
The burst of CO2 proposed for the PETM is consistent with the carbon isotope excursion as well as the rapid warming. Because the PETM involved very rapid warming and was caused by a burst of greenhouse gas, the event has been used to model the potential effects of modern climate change on life and the environment.
The source of the greenhouse gas is still the subject of active debate. One thing we know for sure: the PETM CO2 could not have come from humans or those early primates! There are five candidate sources: (1) gas hydrates in marine sedimentary rocks, (2) coals in terrestrial sedimentary rocks, (3) extensive wildfires, (4) melting of permafrost, and (5) volcanism in the North Atlantic Ocean, coincident with the opening of the sea between Norway and Greenland. Each of these sources could have liberated sufficient CO2 or methane (CH4) to cause the warming and the carbon isotope excursion. This is explained in the video below.
Geologists have several independent ways to distinguish between these different mechanisms. The best is the carbon isotope ratio of the different sources, which varies greatly: methane hydrate has a ratio of about -60 per mil, coal and permafrost are about -20 per mil, and volcanic carbon is about -6 per mil. This means that, to cause an isotope excursion of -4 to -5 per mil, there would need to be many times more volcanic carbon than methane hydrate carbon.
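The reasoning above is a simple isotopic mass balance. The Python sketch below assumes an exchangeable ocean-atmosphere carbon reservoir of about 50,000 Gt with a baseline d13C of 0 per mil (both round, illustrative numbers, not measured values) and computes how much carbon from each source would be needed to produce a -4.5 per mil excursion.

```python
# Illustrative isotopic mass balance for the PETM carbon isotope excursion.
# Reservoir size and baseline d13C are rough assumptions for teaching purposes.
RESERVOIR_GT = 50_000.0   # exchangeable ocean-atmosphere carbon, Gt (assumed)
BASELINE_D13C = 0.0       # pre-event reservoir d13C, per mil (assumed)
EXCURSION = -4.5          # observed carbon isotope shift, per mil

SOURCES_D13C = {"methane hydrate": -60.0, "coal/permafrost": -20.0, "volcanism": -6.0}

def carbon_needed_gt(source_d13c):
    """Solve (M_r*d_r + M_a*d_a) / (M_r + M_a) = d_r + EXCURSION for M_a,
    the mass of added carbon (Gt) required to produce the excursion."""
    target = BASELINE_D13C + EXCURSION
    return RESERVOIR_GT * EXCURSION / (source_d13c - target)

for name, d13c in SOURCES_D13C.items():
    print(f"{name}: {carbon_needed_gt(d13c):,.0f} Gt")
```

The isotopically lightest source (methane hydrate) requires by far the least carbon, which is why the size of the excursion, combined with estimates of the total carbon released, helps discriminate among the candidate sources.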
Another constraint is the volume of carbonate dissolved by acidification in the deep sea, which can be estimated from changes in the amount of CaCO3 at different sites. A third comes from the magnitude of the change in pH, determined from the boron isotope proxy. Since the results from any one proxy are not unique, geologists must combine proxies to constrain the source of CO2. Unfortunately, these estimates give quite different results. Estimates from carbon isotopes and carbonate dissolution point to a source such as permafrost or coal, or a mix of volcanism and methane, while those from boron and carbon isotopes also point to a mixture of volcanism and one of the other sources. Thus, it is likely that a mixture of greenhouse gas sources fueled the PETM.
One of the key aspects of this debate is whether evidence for these mechanisms actually exists. We know that there was volcanic activity in the North Atlantic, and dates from these lavas are almost precisely the same age as the PETM. We also know that this volcanic magma was injected through coal. In addition, there are signs of disturbance on the sea floor off the coast of Florida that could have been caused by the release of methane hydrates. The evidence is thus stronger for volcanism, coal, and methane hydrate than for permafrost and wildfire, but the problem is far from solved.
One other key piece of evidence is the rate of CO2 addition. Methane hydrate, coal, and permafrost tend to release carbon in a catastrophic fashion, whereas volcanic emissions tend to be somewhat slower, spread over time. If CO2 is released quickly, a large part of it is absorbed in the surface ocean, leading to surface ocean acidification. On the other hand, CO2 added slowly largely bypasses the surface ocean and acidifies the deep. The lack of evidence for significant surface ocean acidification is more consistent with slow emission and volcanism.
One of the most important aspects of the PETM is that it allows us to learn how the Earth operates in a warm mode, with higher CO2 levels than today. As we learned, the current CO2 concentration in the atmosphere is about 400 parts per million (ppm), and PETM levels were likely double or triple this concentration (800-1200 ppm). Warm conditions during the height of the PETM led to the increased breakdown of minerals, a process called weathering, and this removed CO2 from the atmosphere. It could be that increased weathering in the warm PETM atmosphere was the beginning of the end of the event. In addition, the absorption of CO2 by the ocean, leading to the dissolution of CaCO3, would also have helped terminate the event.
Download this lab worksheet as a Word document: Earth 103 Lab 1 Practice Word Document [30] (Please download required files below.)
In this lab, we explore the magnitude of the warming that took place during the PETM based on proxy data. We compare PETM temperatures with current temperatures. We will look at Mg/Ca, oxygen isotope, and TEX-86 proxy temperatures from a number of cores of marine PETM sediments drilled in the oceans and on land, as well as margin analysis of leaves from land sections, to determine the temperature change. If you have not read the relevant material on proxies and the PETM in this module, please go back and do this before you begin the lab.
Note: The data for this lab are provided in Google Earth files. The goals of this lab are:
If you have not worked through the Google Earth Tutorial in the Orientation [31], you must do so now.
Get used to manipulating these files in Google Earth. This short video shows you the basics of how you will need to do this for the lab (and it will be critical for future labs too):
Load the PETM Data.kmz [32] file into Google Earth. The map shows continental positions and the extent of the ocean during the PETM. The red markers show the site positions that you will use for this lab. If you click on them, you will get several pieces of information: (1) the proxy used to determine temperature; (2) the temperature just before the PETM (pre-event) and (3) the PETM temperature. I recommend you make a table similar to the example below, that shows all of the data, as well as your calculated temperature change.
Site Positions | Proxy | Pre-event Temperature | PETM Temperature | Temperature Change |
---|---|---|---|---|
Next, practice turning on the modern sea surface temperature (SST) map in the Modern SST.kmz [34] file to compare the PETM temperatures with those of today.
Note that the continents have moved a long way in the 55 million years since the PETM, so there are places in the PETM ocean which now lie in the position of the modern continents and thus don’t have equivalent modern temperatures (you will see some PETM sites in the dark blue area of the modern continents). But if you turn on the PETM Paleogeography kmz file, you will see more accurate reconstructions of the margins of the continents and the locations of the sites relative to them.
After you have made the table from the data that you collected and feel comfortable comparing the data with temperatures read from the temperature kmz maps, please answer the practice questions below. Once you feel good about your answers, go to Lab 1 in Module 1. There will be two labs available to you, Lab 1 Submission (Practice) and Lab 1 Submission (Graded). You will have a chance to answer practice questions and get the correct answers before taking the Lab 1 Graded questions for credit. Please make sure you fully understand the practice questions before you start the graded questions for credit. You may want to review some of the content in Module 1 to help prepare. Keep in mind that you will only get one chance to complete the graded labs.
Which oceanic site (i.e., only sites with numbers located in the ocean basins) has the smallest temperature difference between PETM and modern?
In this module, you should have mastered the following concepts:
You should have read the contents of this module carefully, completed and submitted any labs, taken the Module Quiz and added your entry and reply in the Yellowdig discussion. If you have not done so already, please do so before moving on to the next module. Incomplete assignments will negatively impact your final grade.
One of the key findings of paleoclimate research (Module 1) is that the temperatures we are experiencing today have not been observed for about 125,000 years. The left panel in the figure above shows the recent temperature record reconstructed from proxies and observations. What you see is that the sharp rate of warming Earth is experiencing today has not been recorded in the past. What is also very clear is that the rate (very fast) and amount (about 1.1 °C) of warming we have observed over the last 120 years cannot have been caused by natural processes such as sunspots and volcanism. Climate models can simulate the temperature records quite well based on the physics of the atmosphere and the amount of carbon input from various sources, including fossil fuels. In the right panel of the figure above, the simulated natural inputs are shown in green, the simulated natural plus fossil fuel inputs are shown in brown, and the observed temperature curve in black (the Hockey Stick we briefly referred to in Module 1). Ranges of simulations are shown in shading. What they show very clearly is that the current rate and amount of warming cannot be caused by natural processes. The only way that this rapid rate and amount can occur in the simulations is to add fossil fuels at the rate we are adding them today. This is one of the central conclusions of the 2022 Intergovernmental Panel on Climate Change (IPCC) report, and we will come back to it in Modules 4 and 5.
Earth’s climate is ever-changing; this is one of the main conclusions of Module 1. For times before accurate temperature measurements existed, we have historic records in addition to proxy records. In them we can observe the fall of dynasties as a result of climate change, for example the collapse of Maya civilization following devastating drought in Central America around AD 900. Now, fast forward just a little, and we have multiple accounts and proxy records of two significant changes in climate that impacted medieval societies: the Medieval Warm Period (AD 950 to 1250) and the Little Ice Age (AD 1450-1850); see the figure below. The Medieval Warm Period is famous because of its connection to some interesting events in the European and North Atlantic regions. During this time, it appears that wine production in Great Britain was abundant, even though in today's climate, wine grapes struggle at this high northern latitude. This is also the period when the Vikings colonized Greenland (and ventured farther west to "Vinland" on the North American coast), indicating that this region was warmer than it is today. In a recent examination of climate proxy records from around the world, Penn State’s Michael Mann and his colleagues determined that temperatures in the North Atlantic region were indeed warmer than the 1961-1990 average, but globally, the climate was not as warm as today, as can be seen in the figure below.
The Little Ice Age is similarly famous for its connections to European history. During this period, the winters in Europe were cold enough that the canals in the Netherlands froze over, allowing skaters to travel through the countryside on these frozen pathways; this activity is recorded in some of the masterpieces of painters such as Bruegel. The Little Ice Age was a time of minor advances in many of the Alpine glaciers, and it also signaled the end of the Greenland colonization experiment. This was generally a difficult period in European history, marked by plagues, famine, fighting, and political turmoil. The cause of the Little Ice Age, like the cause of the Medieval Warm Period, is not entirely settled, but it does coincide with a period of volcanic eruptions, which should cool the climate, and with a period of decreased solar activity known as the Maunder Minimum. It has also been suggested that a slowdown in the thermohaline circulation in the oceans (see Modules 3 and 6) may have contributed to this cooling.
In the Medieval accounts, we can see that proxies and historic information are consistent. But the recent warming in the iconic Hockey Stick most certainly stands out in terms of its magnitude and its abruptness. In this module, we take a look at the wide range of observations that give us a sense of how the climate has been changing over past centuries but especially today. We will see the dire threats from persistent droughts, more devastating fire seasons, stronger hurricanes and melting ice sheets.
On completing this module, students are expected to be able to:
After completing this module, students should be able to answer the following questions:
Below is an overview of your assignments for this module. The list is intended to prepare you for the module and help you to plan your time.
Action | Assignment | Location |
---|---|---|
To Do | | |
On June 29, 2021, the mercury at Portland, Oregon reached 116 °F. December 10 and 11, 2021 saw EF4 tornadoes roar through Kentucky and other states, killing 88 and wounding more than 630. In December 2021, 202 inches of snow fell in the Sierra Nevada of California. Hurricane Sandy made landfall in New Jersey on October 29, 2012, a very late point in the year for a storm to reach the northeastern US coast. January 8, 2013 was the hottest day ever for Australia, with an average temperature for the entire continent of 40.3 °C (nearly 105 °F). The AVERAGE (day and night) temperature in Phoenix in June 2017 was almost 95 °F. And nearly 61 inches of rain fell around Houston during Hurricane Harvey in August 2017. Each of these events provoked arguments for or against global warming. Yet, on their own, none of them was definitive proof one way or another.
To make a case for climate change, we need to find the average global climate signal, which is often difficult to discern from the noise of short-term variations and regional differences associated with what we call weather. It is very important to remember that climate is the time-averaged weather, so when we are talking about climate change, we are not talking about the weather that you experience on a daily or seasonal basis — a single heat wave is not evidence of global climate warming, just as one cold snap does not constitute global climate cooling. But repeated, unusual heat waves will shift the average temperature of a region, and this can be taken as a manifestation of a warming climate. The climate is inherently variable over time and space, so detecting a meaningful trend is a challenge that requires great care, a lot of data and often some complex statistics.
Now, a word of warning: you might find the remainder of this page a little dull! However, it happens to be one of the most essential topics in the whole issue of climate change. We absolutely have to understand the significance of trends if we are going to interpret them!
Let’s consider a hypothetical case to help you better understand the nature of the problem. The mean annual temperature is the average temperature over the course of a year, and it varies in a way that we can simulate with a randomly generated sequence of numbers (geoscientists often refer to such variation as “noise”) added to a constant, long-term mean temperature. We would see something like this:
In this case, you might say that if the temperature strayed out of the green zone, you have unusually hot or cold weather, and you would expect this kind of temperature excursion to be a standard feature of the natural variability of the region's climate. But, the green zone has more or less fixed limits, so the standard for calling something a heat wave does not change over time.
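This kind of stationary noisy record is easy to simulate. Below is a minimal sketch; the 20 °C mean, the 0.5 °C noise level, and the two-standard-deviation width of the "green zone" are all illustrative assumptions, not values taken from the figure:

```python
import numpy as np

rng = np.random.default_rng(42)

MEAN_T = 20.0   # constant long-term mean temperature (°C) -- assumed value
SIGMA = 0.5     # year-to-year "weather" noise (°C) -- assumed value
YEARS = 200

# Mean annual temperature = constant climate + random noise
temps = MEAN_T + SIGMA * rng.standard_normal(YEARS)

# The "green zone": mean +/- 2 standard deviations. About 95% of
# years should fall inside it, so excursions beyond it are rare,
# and the standard for "unusually hot" does not drift over time.
lo, hi = MEAN_T - 2 * SIGMA, MEAN_T + 2 * SIGMA
inside = np.mean((temps > lo) & (temps < hi))
print(f"fraction of years inside the green zone: {inside:.2f}")
```

Because the mean and noise level are fixed, roughly one year in twenty pokes above the green zone, and that rate never changes.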
Now, let’s look at another hypothetical case where the climate really is warming, but there is still the natural variability or noise on a shorter timescale.
If you think of the upper edge of the green zone on the previous figure as indicating the line for defining a heat wave, look what happens in a case where there is a steady warming of the climate, with the same kind of weather causing the rapid ups and downs. If at time 0, we said that a temperature of 21°C was a heat wave, that becomes the mean climate temperature by time 60 in the above figure — so, what previously was a rare warm spell is now just the standard. This means that, by older standards, "heat waves" become more common as time goes on.
Note that, once again, we can find areas in this above figure where a shorter time period would seem to indicate cooling or warming. This reinforces the idea that we can’t really talk about climate change by looking at just a few years, and leads to the question of how much time we do need to look at to get a good understanding of climate change. In general, the longer, the better. But just to illustrate this, consider the following, where we take the same kind of hypothetical temperature record as above and systematically find the linear trends over time, with windows of varying length that slide along through the 200-year record.
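The sliding-window idea can also be sketched in a few lines. Here a 0.01 °C/yr warming trend and 0.5 °C of noise are assumed for the hypothetical 200-year record; notice how short windows can produce apparent cooling episodes even though the true trend is warming, while long windows converge on the real answer:

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(200)

# Hypothetical record: steady warming of 0.01 °C/yr plus weather noise
temps = 15.0 + 0.01 * years + 0.5 * rng.standard_normal(200)

def window_trends(t, y, width):
    """Slope (°C/yr) of a least-squares line fit in each sliding window."""
    return np.array([np.polyfit(t[i:i + width], y[i:i + width], 1)[0]
                     for i in range(len(t) - width + 1)])

for width in (10, 30, 100):
    slopes = window_trends(years, temps, width)
    print(f"{width:3d}-yr windows: slopes range "
          f"{slopes.min():+.3f} to {slopes.max():+.3f} °C/yr")
```

The 10-year windows scatter widely around the true 0.01 °C/yr (including negative, "cooling" slopes), while the 100-year windows all cluster tightly near it: the longer the record, the more trustworthy the trend.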
Temperature is probably the most important observation regarding the global climate, but how to measure the temperature of something as large as the Earth is complex. Climate scientists have taken a variety of approaches to answering this question, depending on the timescale of interest; we’ll have a look at the results of these different approaches in this section.
The first approach is the most obvious — you use thermometer data; this is usually called the instrumental record of climate. Let’s take a look at what some of these data look like for a place familiar to the authors — State College, PA. These data come from the US Historical Climatology Network, [37] where you can find data from stations around the US. These data are the monthly mean temperatures from 1849 to 1994, so there has already been some averaging of the data to remove the day-to-day variability.
What we are seeing in the above figure is weather, which is "noisy"; what we want is the climate record from this station, which is not obvious, but we will find it in the first lab exercise for this module. The data for this one station can give us a climate record for the immediate surroundings, but going from this record at one point to the global temperature requires a bit more work.
One approach is shown in the figure below. Say you have an array of weather stations on a map:
The general approach is to draw lines between a station and all of its nearest neighbors, then find the midpoints of these lines (circles in the figure), then make a polygon that connects the circles, giving an area (in gray). This area (in km²) represents a tiny fraction, fi, of the Earth’s surface that is associated with the temperature, Ti, of this station. If you do this for all stations, and sum all the Ti × fi values from each station, you would have a global temperature (all of the fi values add up to 1.00).
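A tiny numerical illustration of the Ti × fi bookkeeping, with made-up station temperatures and area fractions:

```python
# Hypothetical station temperatures (°C) and the fraction f_i of
# Earth's surface assigned to each station by the polygon method.
stations = [
    ("A", 12.0, 0.30),
    ("B", 25.0, 0.45),
    ("C", -5.0, 0.25),
]

# The fractions must cover the whole surface (sum to 1.00)
assert abs(sum(f for _, _, f in stations) - 1.0) < 1e-9

# Global mean = sum of T_i * f_i over all stations
global_mean = sum(t * f for _, t, f in stations)
print(f"area-weighted global mean: {global_mean:.2f} °C")
```

Note that station B, with the largest assigned area, pulls the average toward its temperature; a plain unweighted average of the three stations would give a different, and wrong, answer.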
You can see from the above example that if you have good weather stations spread uniformly across the planet (land and sea) and they have been recording continuously for a long time, then one can take the mean annual temperature of each station and calculate a simple global average for each year, and thus the history of temperature change for our planet. But, as you might imagine, the stations are not uniformly distributed — they are clustered in populated countries — and the number of stations declines as you go backward in time, so the actual process of assembling an instrumental record takes some care. A variety of groups have done this using slightly different data sets and approaches. The trick here is in how you combine the individual temperature records to come up with a global average. This is complicated by the fact that some weather stations may have problems related to things like the "urban heat island effect." Man-made materials retain heat better than open land and the lack of trees also amplifies warming in cities, which are currently warming at double the rate of the global average! Thus, if urban development encroaches on a weather station, the urban heat island effect will make the local temperature rise for reasons that are unrelated to any regional climate change. Researchers have found ways of ensuring that this effect does not skew the results, and many different groups come up with results that are nearly identical, giving us confidence that the data analysis is sound. Just in case you are wondering, the mean surface temperature of the Earth as a whole is 15 °C (59 °F)!
There are a number of good estimates of the recent history of global temperature change, and they are shown, plotted at the same scale, in the figure below.
The figure above shows anomalies relative to the mean for 1960-1980. GISTEMP is from NASA, CRUTEM4 is from the Climate Research Unit of the University of East Anglia in England, Berkeley is from the Berkeley Earth Surface Temperature project from the University of California, and NOAA is from NOAA (no surprise here). These different groups use essentially the same data, but they have slightly different approaches to selecting which data to use and how to convert the station data into global temperatures.
It is interesting to see how similar the curves are, given that they use different strategies for averaging the data, and some of the records are based on slightly different sets of weather stations. In particular, note that none of these estimates shows a general cooling trend over this length of time; they all show warming. Back in the 1800s, there were fewer weather stations, and so it is more difficult to estimate global temperature back then (see figure below), but the coverage gets steadily better as time goes on, and for the last few decades, we have excellent data due to the satellites that now circle the globe taking temperature measurements of every spot on Earth (more on this in a bit). What we see in the figure above is a detail of the blade of the "Hockey Stick": beginning about 1900, the temperature starts to rise, then it flattens out a bit in the 1950s and early 1960s, and then it increases again at a faster pace. The total warming since 1900 is about 1.1 °C as a global average.
Looking in more detail at the Berkeley temperature estimate, which is based on about 1.6 billion measurements, we can see that the uncertainty, indicated in the figure below by the green band surrounding the red line, gets progressively larger as we go back in time, but the uncertainty is practically zero for more recent decades.
This is a good point to explore a question about these records. Why does the annually averaged temperature rise and fall in such a complicated fashion? The sun does not vary in its brightness in such a dramatic fashion (the solar cycle related to sunspots can account for a global temperature variation of about a tenth of a degree), and the greenhouse gases that keep our planet warm do not vary in their concentration this much. Instead, it appears that a good deal of the variability seen in these records is related to things like volcanic eruptions and climate system oscillations such as the El Niño-Southern Oscillation (ENSO), which is discussed in detail in Module 6. In short, ENSO is essentially a huge, sluggish sloshing back and forth of warm water along the equator in the Pacific Ocean; it is like a wave that reflects back and forth between the two edges of the Pacific, and it has a global reach in terms of climate. During the El Niño phase of this oscillation, the warm water is pooled up on the eastern side of the equatorial Pacific, and this has the effect of making the whole Earth warmer (the reasons for this are complex, but the effect is quite clear). Conversely, during the La Niña phase, the warm water is pooled up at the western edge of the equatorial Pacific, and the whole globe tends to be cooler. The El Niño phase brings flooding rains to California and wet conditions to Florida (we recommend you visit Disney during La Niña!), but crippling drought to Australia and southern Africa.
The above figure shows the last 25 years of globally averaged instrumental surface temperature measurements. Also shown is the recent history of fluctuations in ENSO and the period of atmospheric disturbance due to the eruption of Mount Pinatubo in the Philippines in 1991, one of the largest of the 20th century; the volcano injected ash and sulfur gases into the upper atmosphere, where they blocked enough sunlight to cool the global climate for a period of about 3 years.
Satellites offer another way of studying temperature changes, and they are not subject to the same problems associated with weather station data: they provide complete coverage of surface temperature on land and at sea. As can be seen below, there is very good agreement between the satellite measurements and the weather station data (NOAA surface in the figure below). The only limitation is that the satellite data go back only to about 1980.
In the figure above, the global instrumental temperature record in blue (NASA GISTEMP) is compared to two versions of the microwave sounding unit [40] satellite (MSU) data of lower atmospheric temperatures (UAH from Univ. of Alabama, Huntsville; RSS from Remote Sensing Systems, Inc.). The timing of the ups and downs in the satellite record is a near-perfect match with the instrumental record, but the magnitude of change is greater according to the satellite measurements. For comparison, we show the history of the El Niño-La Niña oscillation and periods of volcanic eruptions that load the atmosphere with sulfur gases, which form tiny sulfate particles that block sunlight and cool the planet. The eruption of Pinatubo in the Philippines had a big effect, and strong El Niño periods lead to warming; together, these two variables (volcanoes and El Niño), along with small fluctuations in sunlight, account for the majority of the "noise" in these records. Another important point of this figure is that it confirms that the instrumental temperature record does a good job of representing what actually happened.
Next, we look at the spatial variations in the temperature over different spans of time.
The figure above shows the difference in instrumentally determined surface temperatures between the period January 1999 through December 2008 and "normal" temperatures at the same locations, defined to be the average over the interval January 1940 to December 1980. The average increase on this map is 0.48 °C, and the widespread temperature increase is considered to be an aspect of global warming. The most striking feature of this map is that the temperature changes have not been uniform across the globe; the high latitudes (above about 50 degrees) in the Northern Hemisphere have warmed more than any other part of the Earth, while the tropics warmed far less. But Antarctica has been warming significantly too; most recently, in 2022, record temperatures some 20 °C warmer than normal were observed there!
We now turn our attention to the spatial pattern of temperature change over a much longer range of time — back to 1884. Below is an animation of the temperature change based on the instrumental record. It is worth remembering that the quality and quantity of the data get better and better as time goes on, so the early parts of this animation have more uncertainty connected to them.
The movie below is from NASA’s reconstruction of surface temperature since 1884 and it shows how Earth has warmed over the last century plus in a very, very graphic and indisputable way. Just in case you can't see this, 2016 was the warmest year on record, and 16 of the 17 warmest years have occurred since 2000!
Click here if the video above does not play [43]
Play this movie and watch as the globe becomes dominated by the yellow, orange, and red colors signifying warmer temperatures. Note that the warming is not uniform across the globe, nor is it steady through time, but the warming trend is nevertheless clear to see.
Another way of looking at this history of warming is by taking the average temperature at each latitude for each year and then stringing those along the horizontal axis, as below:
In the figure above, as in the movie above, the temperatures are given as anomalies, or differences relative to a mean established from some arbitrary period of time (1951-1980 in this case). One thing that is clear is that the polar region of the Northern Hemisphere is the area that has warmed the most — more than 6°C during this time period, when the mean global temperature has risen by a bit less than 1°C. Also clear is the fact that starting around 1990, nearly all of the globe is warming. It's pretty hard to argue with this plot, isn't it?
But if you are still unconvinced, we have another dataset up our sleeves that is completely independent of all of the atmospheric data we have shown so far: the ground has also warmed up!
We next turn our attention to a very different means of reconstructing the temperature — studies of the temperatures measured in boreholes (i.e., holes drilled into the ground) at various locations around the Earth. The temperature profiles (how temperature changes with depth) for three representative boreholes in eastern Canada are shown in the figure below:
Note that in all three cases, the temperature profile curves toward higher temperatures near the surface; this reflects the response of soil and bedrock to warming from the atmosphere. In the absence of warming at the surface due to climate change, these temperature profiles would tend to follow the trends represented in the lower few hundreds of meters, and those trends would intersect the surface at around 3-4 °C.
The basic idea here is a surprising one — that the way temperature changes down a borehole at the present time tells us something about how the surface temperature has changed in the past. This is indeed a remarkable and useful reality of some fairly basic physics of heat flow. It also provides us with an excellent way to filter out the “noise” in the climate record and focus on the main trends.
Heat is just a measure of the kinetic or vibrational energy of the atoms in some substance. If something is hot, its atoms are vibrating very fast, and because vibrating atoms affect neighboring atoms, heat can be transmitted; we often talk about this heat transmission as heat flow. Heat flows from hot regions to cold regions, and the rate of heat flow is proportional to what we call the thermal gradient — the rate of temperature change with distance. In our case, for distance, we are talking about depth in the Earth, and the center of the Earth is very hot — about 5000°C. The surface, instead, is quite cool at 15°C, so heat from the Earth tends to flow out to the surface, and this process is cooling the Earth very slowly. This situation leads to a geothermal gradient (rate of change of temperature with depth) that tends to be more or less steady at around 20 or 30°C per kilometer. The heat released to the surface is tiny compared to the energy coming from the Sun, so this geothermal heat, on a global basis, does not affect the climate.
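We can put rough numbers on this last claim. The sketch below applies Fourier's law (heat flux = conductivity × thermal gradient) with illustrative values for crustal rock, and compares the result with sunlight averaged over the globe:

```python
# Conductive heat flow out of the Earth, q = k * dT/dz (Fourier's law).
# Both values below are illustrative, chosen from the ranges in the text.
k = 2.5                  # thermal conductivity of crustal rock, W/(m K)
dT_dz = 25.0 / 1000.0    # geothermal gradient: 25 °C per km, in K per m

q = k * dT_dz            # heat flux in W per square meter
print(f"geothermal heat flux: {q * 1000:.1f} mW/m^2")

# Compare with the roughly 340 W/m^2 of solar energy reaching the top
# of the atmosphere, averaged over the whole globe
solar = 340.0
print(f"ratio to incoming sunlight: {q / solar:.5f}")
```

The geothermal flux comes out to a few tens of milliwatts per square meter, thousands of times smaller than the incoming sunlight, which is why it plays no role in the climate.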
When the surface temperature rises and becomes hotter than the temperature just below the surface, heat moves down into the ground, but it does this quite slowly. When the surface temperature becomes colder, heat flows up from the ground, cooling the ground, and this cooling is transmitted downward slowly. This general idea is illustrated schematically in the figure below:
Each of the four rectangles shows the variation of temperature with depth above and below the surface at different times. At the beginning (Time 1), the temperature below the surface increases steadily, while it is constant above the surface. Then at Time 2, the surface temperature suddenly rises and is hotter than the ground right at the surface. By Time 3, the ground temperature right near the surface warms, but that warming does not penetrate very deeply. At Time 4, the surface temperature has continued to remain high, and the heat flowing down into the ground has reached a greater depth.
Rocks have a very low thermal conductivity (conductivity describes how readily heat is transported through a material at the molecular level) compared to many other materials, which means that it can take a long time for rocks underground to respond to changes in surface temperature. Because of the way that heat flows through rocks, short-term changes are smoothed out as the heat diffuses through them. This means that borehole temperature profiles provide information only about changes in the long-term average temperature.
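The slow downward creep of a surface warming signal can be illustrated with a simple finite-difference diffusion calculation. The diffusivity, grid spacing, and the idealized step change in surface temperature below are all assumptions chosen for illustration:

```python
import numpy as np

# Explicit finite-difference sketch of heat diffusing into rock after a
# sudden 1 °C surface warming. A diffusivity of ~1e-6 m^2/s is typical of rock.
KAPPA = 1.0e-6                    # thermal diffusivity (m^2/s)
DZ = 5.0                          # grid spacing (m)
DT = 0.4 * DZ**2 / (2 * KAPPA)    # time step, inside the stability limit
DEPTHS = np.arange(0, 505, DZ)    # 0 to 500 m

T = np.zeros_like(DEPTHS, dtype=float)  # anomaly relative to the old climate
T[0] = 1.0                              # surface suddenly 1 °C warmer

years = 100
steps = int(years * 3.154e7 / DT)       # ~3.154e7 seconds per year
for _ in range(steps):
    T[1:-1] += KAPPA * DT / DZ**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    T[0] = 1.0    # surface held at the new, warmer temperature

# After a century, the warming has penetrated only on the order of 100-200 m
for d in (0, 50, 100, 200, 400):
    i = int(d / DZ)
    print(f"depth {d:3d} m: anomaly {T[i]:.2f} °C")
```

Even after 100 years, the anomaly at 400 m depth is essentially zero, which is exactly why the deep part of a borehole still "remembers" the older, cooler surface climate.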
Unlike most other methods for studying paleoclimate, borehole thermometry does not need to be calibrated against the instrumental record. Hence, borehole thermometry provides an independent record of paleoclimate against which other paleoclimate techniques can be validated. Below, we see the results of the analysis of a global data set of borehole temperatures, which give us an estimate of the global temperature change.
Clearly, the shapes of the curves, that is, the rates of temperature change over this time period, are in close agreement, which is important since they come from very different, independent data sources. The borehole temperature reconstruction does not match the last bit of this time period, in part because the measurements begin some distance down the hole, and because many of the measurements were made in the 1980s and 1990s, before the end of the instrumental record shown.
The oceans have absorbed over 90% of the excess heat resulting from greenhouse gas emissions since the 1970s, so if it weren’t for the ocean, the land would be a lot hotter than it is today. Because water can absorb a lot of heat, though, the ocean’s temperature has not increased as fast as the land’s, although there have been some alarming trends in the last year or so (2023-2024).
The instrumental record of temperature change in the oceans goes back to about 1850 and consists of thermometer measurements made on water samples taken by merchant and navy ships as they sailed the world’s oceans. The data are understandably best for parts of the oceans along major trade routes, and they are less abundant further back in time. These measurements, just like the land-based weather station data, have to be gridded to come up with a global average sea surface temperature. As might be expected, the sea surface temperature record is similar to the global temperature records, in part because the oceans make up about 71% of Earth’s surface. But even if we separate out the land surface temperature from the global record and compare it to the ocean surface temperature, they are quite similar, as seen in the figure below.
Although the two records are quite similar, there are some differences — the SST changes over a smaller range than the land surface temperature, and the land temperature is subject to more dramatic swings. This difference is largely due to the greater heat capacity of the oceans relative to the air — it takes a long time to heat and cool the oceans, but air temperature can change quite rapidly.
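To see how lopsided this heat-capacity contrast is, compare the heat needed to warm a 100 m ocean mixed layer by 1 °C with the heat needed to warm the entire atmospheric column above the same square meter; all values are illustrative round numbers:

```python
# Heat capacity per square meter of surface: ocean mixed layer vs. the
# whole atmosphere. All numbers are illustrative round values.
RHO_W, CP_W = 1025.0, 3990.0   # sea water: density (kg/m^3), specific heat (J/(kg K))
MIXED_LAYER = 100.0            # assumed mixed-layer depth (m)

ocean_capacity = RHO_W * CP_W * MIXED_LAYER   # J per m^2 per K

ATM_MASS = 1.0e4               # mass of atmosphere above 1 m^2 (kg), from surface pressure
CP_AIR = 1004.0                # specific heat of air (J/(kg K))
atm_capacity = ATM_MASS * CP_AIR              # J per m^2 per K

print(f"ocean mixed layer: {ocean_capacity:.2e} J/m^2/K")
print(f"entire atmosphere: {atm_capacity:.2e} J/m^2/K")
print(f"ratio: {ocean_capacity / atm_capacity:.0f}x")
```

Even a modest 100 m slab of ocean holds roughly forty times as much heat per degree as the whole atmosphere above it, which is why sea surface temperatures swing over a smaller range than air temperatures over land.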
Measurements from a system of thousands of autonomous floats stationed throughout the oceans allow us to take the temperature of the oceans down to a depth of 2000 m. These measurements, combined with earlier ship-based data, go back to 1955 and show that not just the surface of the oceans but the whole upper half of the ocean is slowly warming, by only about 0.1 to 0.2 °C averaged over the globe during the past 50 years, but this is a vast amount of water that has been warmed.
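Just how vast? A back-of-the-envelope estimate, using illustrative round numbers for the ocean's area and the properties of sea water, and the middle of the 0.1-0.2 °C range quoted above:

```python
# Rough energy required to warm the upper 2000 m of the ocean by 0.15 °C.
# All values are illustrative round numbers.
OCEAN_AREA = 3.6e14       # m^2, roughly 71% of Earth's surface
DEPTH = 2000.0            # m, the depth range of the float measurements
RHO, CP = 1025.0, 3990.0  # sea water: density (kg/m^3), specific heat (J/(kg K))
WARMING = 0.15            # °C, mid-range of the observed 0.1-0.2 °C

mass = OCEAN_AREA * DEPTH * RHO   # kg of water warmed
energy = mass * CP * WARMING      # joules absorbed
print(f"energy absorbed: {energy:.1e} J")
```

The answer is on the order of 10^23 joules: a "small" temperature change applied to that much water represents an enormous amount of stored heat.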
So, while the whole ocean has absorbed a huge amount of heat, its overall temperature has changed little. Nevertheless, the very surface of the ocean has warmed almost as much as the rest of Earth’s surface, and from mid-2023 through 2024 the surface warming has been quite alarming, with temperatures almost a degree warmer than in 2016.
We should pause and make a point or two about these temperature reconstructions because they are very important to our understanding of how Earth's climate has been changing.
We now turn our attention to water in the atmosphere. Water is a tremendously important part of the climate system, and it has a huge influence on the weather we experience every day. Clouds are made of water droplets or tiny ice crystals, and obviously, precipitation is water; but you also can sense the hidden water vapor in the form of humidity. If you don't understand the concept of humidity, plan a trip to the Magic Kingdom in Orlando, or, worse still, New Orleans in August! As we will learn in Module 3, water is one of the most important ways of transporting energy in the climate system. When water evaporates, it takes heat energy from the surface and carries that heat with it until it condenses back into liquid water, at which point it releases that heat into the atmosphere — this is what powers energetic storms. If you watch a large fluffy cloud building up on a summer day, expanding and growing up to greater and greater heights, just remember that all of that swirling movement is driven by the energy released as water vapor condenses.
The evaporation of water speeds up when it gets warmer. You could confirm this by doing an experiment with two pots of water on the stove, with the burner beneath each set to a different temperature — the hotter one will always evaporate faster. The same is also true of Earth's climate system — a warmer planet means more evaporation, which means more energy added to the atmosphere. And warmer air can hold more moisture than colder air. If we study the laws of thermodynamics (specifically, the Clausius-Clapeyron relation), we find that for a 1°C increase in air temperature, the atmospheric water content should increase by about 7%. Until recently, it was difficult to measure the global water content of the atmosphere, but with the advent of satellites, we can now do this.
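That roughly 7% per °C figure follows from the Clausius-Clapeyron relation, which governs how fast the saturation vapor pressure of water rises with temperature. Here is a minimal Python sketch, assuming standard textbook values for the latent heat of vaporization and the water-vapor gas constant, and using the simple constant-latent-heat exponential form of the relation (an approximation, not a full psychrometric calculation):

```python
import math

L_V = 2.5e6   # latent heat of vaporization of water, J/kg (textbook value)
R_V = 461.5   # specific gas constant for water vapor, J/(kg K)

def saturation_vapor_pressure(T_kelvin, e0=611.0, T0=273.15):
    """Clausius-Clapeyron (constant-L form): saturation vapor pressure in Pa."""
    return e0 * math.exp((L_V / R_V) * (1.0 / T0 - 1.0 / T_kelvin))

T = 288.15                            # a typical surface temperature, 15 C
e_now = saturation_vapor_pressure(T)
e_warmer = saturation_vapor_pressure(T + 1.0)
increase = e_warmer / e_now - 1.0     # fractional increase per 1 C of warming
print(f"Water-holding capacity rises about {100 * increase:.1f}% per degree C")
```

Running this gives an increase of roughly 6 to 7%, consistent with the figure quoted above; the exact number depends slightly on the starting temperature.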
The map above shows the relative changes in humidity (atmospheric water content) at the end of 2010 compared to the average over the period of satellite observations (1981 - 2010) — so this is a type of humidity anomaly map for late 2010. The green areas are more humid than normal and the brown/orange areas are drier than normal. On the whole, you can see that the globe is moister than normal. You can also see that the eastern side of the equatorial Pacific Ocean is drier than normal — this is because 2010 was a La Niña year, and warm water was pooled up on the western side, cooler water on the eastern side; the cooler water evaporates less, hence the drier atmosphere above that region.
If we look at the longer record of the globe as a whole, we can see how the water content of the atmosphere has been changing over the last 40 years.
The thick green line in the figure above is the global humidity anomaly (data from NOAA), with its best fit linear trend as the blue dashed line. Over this time period, the water content has risen by about 5%. The thin blue line is the history of the El Niño - La Niña oscillation — the seesaw sloshing of warm water back and forth along the equatorial Pacific Ocean. Positive values (scale on the left) indicate an El Niño year when more of the warm water sloshes over to the eastern Pacific (the South American side); negative values mean the warm water is pooled up on the western side near Indonesia. As can be seen, some of the fluctuations in global humidity correspond to the El Niño history, with more moisture generally associated with an El Niño year. But the general trend is rising humidity, and the El Niño history does not show a similar rise — this tells us that while El Niño is important, the underlying trend is more likely related to a warmer planet.
The central point here is that a warming Earth should have a more humid atmosphere and indeed that is the case, and more water vapor means more energy in the atmosphere.
In the summer of 2021, Western Europe experienced severe and deadly flooding. The rainfall was extraordinary, possibly a 1,000-year event, as a storm system stalled for several days, dumping 11 inches in 48 hours in eastern Belgium, including 9 inches in the city of Liège, and up to 8 inches of rain in a 9-hour period in Germany. Flooding occurred over a wide area including Belgium, Germany, Luxembourg, the Netherlands, Switzerland, and Austria. In all, 243 people died, including 196 in Germany, where the floods were called the greatest natural disaster the country has ever experienced. In Belgium, floodwaters caused buildings to collapse and washed away cars. In Germany, the Ahr river valley was hit particularly hard because its narrow gorges funneled the floodwaters. The country’s flood warning system was cited as a monumental failure, although local authorities appear to share some of the blame.
Fast forward to California in the winter of 2022-2023, when a series of atmospheric rivers bringing moisture from the tropics dumped huge amounts of rain on the coast and inland areas and snow on the Sierra Nevada. An estimated 78 trillion gallons of water fell during this period, with over 30 inches of rain and major flooding in lowland areas, and up to 58 feet of snow in the mountains.
Atmospheric rivers hitting California in January 2023
Credit: https://photojournal.jpl.nasa.gov/archive/PIA25597.mp4 [48]
Among scientists, one near-unanimous target of blame was climate change, with the flooding even exceeding current model forecasts. A warmer atmosphere can hold more moisture. In addition, melting Arctic ice is weakening the jet stream, leading to storm systems that move more slowly or stall. Combined, these factors are producing more extreme events, including days of flooding rains.
As stated above, warm air holds more moisture than cooler air. What does a moister atmosphere mean for precipitation? It means two things — more precipitation over the globe and a higher frequency of extreme precipitation events. The spatial pattern of precipitation is complex — far more so than for temperature — and measuring the frequency of extreme events is a challenging statistical problem. But some progress has been made on these questions, at least for certain regions of the globe.
First, let's take a quick look at the precipitation pattern of the globe, which is now being measured in great detail by NASA's Global Precipitation Measurement (GPM) satellites.
The two images above show the rainfall averages in terms of mm/day for the month of March 2019, above, and the rainfall anomaly for the same month, below.
Earlier satellites did not have the same resolution, but the record goes back to the late 1970s, allowing us to get a picture of the longer-term mean precipitation patterns, which can be seen in the video below. Watch this movie a few times through to see the annual patterns of precipitation, and focus especially on the region around India, where the summer monsoons show up beautifully.
On a global scale, there is no clear sign that the amount of precipitation is increasing, as can be seen in this next figure, which plots the global average precipitation rate for each month.
The figure above shows the history of the monthly average precipitation rates, averaged over the globe — kind of a difficult thing to swallow at first. Think of the values plotted along the y-axis as being the precipitation rate, averaged over the whole globe for a given month. As you can perhaps see, there is a strong seasonal cycle (with peaks in the fall each year) — I've removed the seasonal variation from the raw data to give the thicker blue line, showing variability that is not related to the seasonal cycle. One thing that is clear is that there is no general upward or downward trend over this period, although there are ups and downs, some of which correspond to the El Niño oscillations.
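The step of removing the seasonal cycle can be illustrated with a short sketch. The numbers below are invented for illustration; the idea is simply to subtract each calendar month's long-term average from the raw monthly series so that only non-seasonal variability remains:

```python
# Toy illustration of removing a seasonal cycle from monthly data.
# The values here are made up; real global precipitation data would
# come from a source such as NOAA.

def deseasonalize(monthly_values):
    """Subtract each calendar month's long-term mean from a monthly series.

    monthly_values: a list assumed to start in January and span whole years.
    """
    n_years = len(monthly_values) // 12
    # Long-term mean for each of the 12 calendar months
    climatology = [sum(monthly_values[m::12]) / n_years for m in range(12)]
    return [v - climatology[i % 12] for i, v in enumerate(monthly_values)]

# Two years of fake precipitation rates (mm/day) with an autumn peak
series = [2.6, 2.6, 2.7, 2.7, 2.8, 2.8, 2.9, 3.0, 3.1, 3.2, 3.0, 2.8,
          2.7, 2.7, 2.8, 2.8, 2.9, 2.9, 3.0, 3.1, 3.2, 3.3, 3.1, 2.9]
anomalies = deseasonalize(series)
# What remains after subtraction is the year-to-year variability,
# free of the repeating seasonal ups and downs.
```

In the real analysis the same idea is applied to roughly four decades of monthly data rather than the two fake years used here.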
So, global precipitation, on the whole, is not going up, as far as the data reveal. What this means is that although the atmosphere is getting moister, the new addition of moisture is not coming out as precipitation — the atmosphere is retaining this extra water vapor.
But because the trend suggests there will be extra water vapor in the atmosphere in the future, when the conditions are right for a big precipitation event, the event might be bigger than today or in the past. In addition, predictions suggest there might be more storm events that exceed a certain threshold so that they are classified as extreme precipitation events.
More regionally, the El Niño cycle produces a dramatic change in precipitation patterns in parts of the globe, with Southeastern Australia becoming dry in summer and more prone to bushfires. In the US, El Niño corresponds to heavy winter rainfall in California as the so-called "Pineapple Express" picks up and transports heat and moisture from the tropical Pacific. Folks in California always hope that El Niño events put an end to drought. More on that in Module 8. El Niño is generally not a good time to visit Florida in the winter, as the same pattern extends across the southern US.
Drought is a very familiar foe in parts of the US, significant regions of Africa, and much of Australia. Drought is often called a creeping disaster, as it decimates a region slowly. We will talk a lot more about drought in Modules 8 and 9. Here, we briefly consider the recent record of drought. As we saw previously with research on drought and the Mayans, oxygen isotopes of stalactites can be interpreted in terms of precipitation. Another indicator of drought is the width of tree rings. One of the most comprehensive tree-ring data sets is shown in a compilation of precipitation in New Mexico from 137 BC to 1992 (The New York Times, The Longest Measure of Drought: 21 Centuries of Rainfall in New Mexico [51]). The data set shows that with the exception of a wet phase in the last decade, much of the 20th century was dry in New Mexico, and, in general, the data suggest that western North America has become drier over the last 1000 years.
As it turns out, there is solid evidence that the severity of droughts has increased over the long term in many parts of the world. And, as it turns out, climate models predict that drought will become a part of everyday life in many regions in the coming century. In the US, these areas include large parts of the west and south central (Kansas, Oklahoma, Texas). Especially in California, climate models suggest that warmer years in the future are more likely to also be drier years, and this will increase the severity of future droughts.
So, how is the severity of drought quantified? The most widely used method to measure drought today and in the past is the Palmer Drought Severity Index (PDSI). This index is an accounting of the balance between the supply of moisture via precipitation and the demand for moisture via potential evapotranspiration (PE). The PE is the amount of water that could be evaporated given an unlimited supply of water. The National Oceanic and Atmospheric Administration (NOAA) publishes a map of the PDSI over the US every month (see figure below).
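To make the supply-and-demand idea concrete, here is a greatly simplified, hypothetical sketch in Python. It is not the full PDSI algorithm, which involves a complete soil-moisture accounting and location-dependent weighting factors; it only uses Palmer's published persistence recursion to show how monthly moisture anomalies accumulate into a slowly evolving index:

```python
# A greatly simplified sketch of the accounting idea behind the PDSI.
# Real PDSI calculations run a full water-balance model; here the
# climate weighting factor is assumed to be 1 for illustration.

def drought_index(precip, pot_evap, z_scale=1.0):
    """Accumulate monthly moisture anomalies into a PDSI-like index.

    precip, pot_evap: monthly moisture supply and demand (same units).
    z_scale: stand-in for Palmer's climate weighting (assumed 1 here).
    """
    index = 0.0
    history = []
    for p, pe in zip(precip, pot_evap):
        z = z_scale * (p - pe)           # moisture anomaly for the month
        index = 0.897 * index + z / 3.0  # Palmer's persistence recursion
        history.append(index)
    return history

# Twelve dry months: demand (PE) exceeds supply (precipitation)
idx = drought_index(precip=[2.0] * 12, pot_evap=[3.5] * 12)
# The index drifts steadily negative, the PDSI convention for drought.
```

The persistence term (0.897 times last month's value) is what makes the index respond slowly, so that a single wet month does not erase a long drought.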
The PDSI has decreased (become drier; negative values of the index indicate drought) for the driest parts of the globe since 1950, while for most of the US, the PDSI has increased (become wetter) over this time period. However, the last century has seen severe droughts in certain regions. For example, the major drought of the 1930s Dust Bowl in the Central US caused severe hardship for farmers and others in this region. The 1950s and 1980s also saw severe droughts in the Great Plains of the US. We will find out in Module 4 how models project that the severity of droughts will increase in many parts of the globe with future climate change. Moreover, we will learn more about the impact of drought on water supply in Module 8 and on food in Module 9.
Drought is currently at a critical level out west. Lake Powell, the second largest reservoir in the US, is currently at its lowest level on record and is approaching the level at which water and hydroelectric power supplies are threatened. Such a shortfall would be especially devastating for communities that rely on the Colorado River system for water, such as Las Vegas, Nevada. We will come back to this issue in Module 8.
The latest (2022) report of the Intergovernmental Panel on Climate Change (IPCC), the large international team of scientists tasked with predicting our climate future and its impact, issues stern warnings about drought in the western US, Australia, the Mediterranean region, and sub-Saharan Africa. Drought will have a major impact on human health, largely because of its effect on access to clean drinking water, and will require major adaptation by communities. Adaptation can involve conservation measures as well as finding new sources of water, for example through desalination. Such adaptation will be easier in developed countries like the US than in the developing world, where countries like Yemen and Madagascar are already struggling with the devastating impact of drought and do not have the resources required to adapt.
Summer 2023 will be remembered for the truly devastating and catastrophic fires that destroyed the historic tourist city of Lahaina in Maui. As of this writing, the fire had caused 115 fatalities, with 66 people still unaccounted for. More than 2,200 buildings were destroyed, and damage is expected to exceed $6 billion. The cause of the tragedy is still under investigation, but it was likely sparked by electric utility poles downed by hurricane-force winds. These same winds were responsible for the exceedingly rapid spread of the fires, which made evacuation extremely difficult. The failure of warning systems to alert residents is also under investigation. Almost certainly, climate change is at least partially responsible for the fires. Though a lush tropical island, Maui, like the rest of Hawaii, has become much drier since 1990, especially during the wet season. Hotter temperatures result in thinner clouds that produce less rainfall. Another critical factor is that the area around Lahaina, once the site of sugar cane farms, is now covered by invasive grasses that acted like a tinderbox during the fires. Sadly, the Lahaina tragedy is a sign of what lies ahead for many places around the globe.
Fire ravaged the waterfront in Lahaina, Maui on August 8th, 2023
Credit: https://commons.wikimedia.org/wiki/File:Os-lahaina-town-fire.jpg [59]
The 2023 summer began with smoke-filled skies across the eastern US from fires raging in Canada. The quality of the air in New York City was the worst of any large city in the world. The last five years have brought devastating fires to California and other western states! Images from the news have shown San Francisco’s air full of smoke and glowing red. Every year brings new records and new tragedies. Deadly fires are not unique to California; in fact, Australia has a history of particularly devastating fires. And climate change is going to provide all of the ingredients for more fires in the future: more fuel (from extreme rainfall events), more effective natural ignition (often dry lightning) and, sadly, often arson, as well as the conditions to keep fires burning (drought and heat). The latest IPCC report issues some stern warnings about fire, predicting that the livelihoods of millions of people will be impacted by wildfires in the future. The citizens of California, Australia and other parts of the world need to get used to apocalyptic fires. The last five years have been a wake-up call.
The Camp Fire, the deadliest and most destructive fire in California history, began November 8th, 2018, from a spark from an electrical wire. By the time it was finished two weeks later, the fire had burned more than 150,000 acres around the town of Paradise. The fire occurred late in the season and was whipped up by strong winds and abundant fuel from the previous rainy season. A virtual firestorm quickly converged on Paradise, a ridge-top town that was used to fires but had grown fast and without good evacuation planning. On November 8th, the evacuation order came too late, and many folks did not receive it because cell towers were down. Because the call was so late, evacuation could not occur in an orderly fashion, from one side of the town to the other. Hundreds of vehicles quickly jammed the three routes out of town, and the fire quickly trapped many people in their cars. Folks who did not get the evacuation order were trapped in their homes. A total of 86 people in Paradise were killed, and the town was almost completely burned down.
In the wake of the fire, the large utility company PG&E filed for bankruptcy to avoid major lawsuits. It turns out the company discussed turning off the power grid in the days before the fire began but decided not to. Now, in the wake of the fire, companies will be much more likely to turn off the grid if there is a chance of fire, and this will impact millions of customers in California.
As it turns out, the Camp Fire was one of two major fires that began on November 8th. The massive Woolsey Fire west of LA also began that day; it burned 97,000 acres, killed three people, and forced the evacuation of 295,000 people.
The Camp and Woolsey fires are the “new normal” for California and the state must deal with difficult issues such as vegetation management, sensible development, evacuation planning, and regulation of power companies.
2020 was a record-breaking fire year in California. More acres burned than ever before (over 4.3 million), at least in recent times, with over 9,200 separate fires, including the first “gigafire”, the August Complex Fire, which burned more than 1 million acres (over 1,500 square miles)! Total damages exceeded $2.5 billion, and 32 people perished. Although arson was to blame for some of the fires, more often they were sparked by lightning that occurred in thunderstorms without rain. The largest fires in Northern California, the SCU Lightning Complex Fire, the LNU Lightning Complex Fire, and the August Complex Fire, were all started by lightning strikes.
The fires were fueled by the continued drought in the state and across the western US, and, indeed, large fires occurred over much of the region, as far east as Colorado and as far north as Washington state. In all, over 52,000 fires consumed almost 9 million acres.
The year was also notable for fires that encroached on Seattle and especially Portland. In fact, wildfires caused an especially scary situation in the latter metropolitan area, where 40,000 people had to be evacuated and 500,000 lived in evacuation warning zones at the height of the blazes. These fires in the northern part of California, Oregon, and Washington were responsible for extremely hazardous air quality for weeks, and in all, over 17 million people experienced unhealthy or hazardous air that year. In fact, the air in Marion County in Oregon was so polluted it registered off the measurement scale. The air quality in Northern California, including Sacramento and San Francisco, was unhealthy for weeks, and the young, the elderly, and people with asthma, COPD, and other preexisting conditions were advised to remain indoors. The smoke can cause burning eyes and lung ailments, including bronchitis, and aggravate heart and lung diseases. The elevated levels of fine particulates, including hazardous compounds, may also cause longer-term health issues. This public health situation was especially difficult as it was superimposed on Covid-19.
2020 will be remembered for the scale of the western fires but also for their profound impact on public health. On top of the possibility of evacuation and loss of property, many residents of the western states were awakened to the prospect that their “new normal” fire season included toxic air.
2021 has already been a devastating fire season. In Oregon, the Bootleg Fire burned over 400,000 acres and destroyed more than 400 homes. The fire was started by lightning and took more than a month to contain. It was rapidly eclipsed in size and damage by the Dixie Fire in northern California. That fire was caused by a downed power line and, as of mid-August, had burned over 560,000 acres and still was not contained. The fire destroyed Greenville, a Gold Rush-era town of about 1,000 people, whose residents were forced to evacuate rapidly as the flames approached.
The 2020 fire season in California was the worst on record. However, these were not the most devastating fires in Earth’s recent past. The Black Saturday bushfires of February 2009 in Victoria, Australia, were fueled by extraordinary heat and strong winds. At the peak of the inferno, there were some 400 blazes. Conditions leading up to the fires were extraordinary. The mercury hit 116 degrees F in Melbourne in a heatwave that started the week before the fires, and at the peak of the summer drought, the dry brush was perfect fuel. On the fateful Saturday, the first fire was started by arson. Falling power lines and lightning ignited other fires. Fires consumed some 1.1 million acres, destroyed whole towns, caused some $4.4 billion in damage, and killed 173 people. Most of the damage was done in the first few days, but the blazes raged for weeks. One of the astounding aspects of the fires was the observation by firefighters of sideways mini-tornadoes, technically called horizontal convective rolls. As the air at the surface warmed and rose, it was forced to move in a corkscrew pattern oriented parallel to the ground. This created bands of alternating faster and slower surface winds. The fast winds surpassed 30 mph and ignited huge swaths of land in a catastrophic fire. There are terrifying stories of people getting swept up in the flames while trying to escape the inferno on tiny mountain roads.
The late part of 2019 and the early stages of 2020 brought more intense wildfire to Australia, especially near Sydney in the southeast. The season began very early as a result of the decade-long drought and intense heatwaves (see the Heatwave page in this module). As of this reporting, the fires have burned millions of acres, destroyed thousands of buildings, and killed dozens of people and millions of animals, including koalas and kangaroos. The smoke has caused unhealthy air in eastern Australian cities, and the plume has drifted as far as New Zealand. The fires have caused a political upheaval because Australia has high per-capita carbon emissions and produces a significant amount of its energy from coal. Given how susceptible the country is to climate change, many citizens feel that it is not doing enough to curb emissions. 2019-2020 is a glimpse of the future for Australians.
So, how is increased fire activity related to climate change? Fire is very much a part of the ecosystem in places like Australia, California, South Africa, and Southern Europe. Even before people were around, fires ignited by lightning occurred regularly in environments with dry seasons, as nature’s way of germinating drought-resistant species and fertilizing the soil. But people have made fire events much more common. First, many fires are started by arson or, in the cases of the Lahaina and Camp fires, by downed electric utility poles. Clearly, heat and drought are good for fire, and as we have seen, both of these ingredients will increase in the future as a result of anthropogenic climate change. But the other key aspect of fire is fuel, which is supplied by precipitation and active growth of vegetation. Climate change is likely to cause more variability in temperature and precipitation, creating more contrast between drought years and wet years. This will lead to greater fire risk. The heavy rains in California in the winter of 2016-2017 caused significant growth of vegetation in the uncultivated hills and canyons surrounding residential areas, and this vegetation dried out in the hot and dry 2017 summer. Then, once the warm fall Santa Ana winds arrived, the recipe for disaster was in place.
The main culprit for the increase in fires in the western US and Hawaii is the long-term drought. However, development in forested and brushy areas that are prone to burn has also increased the risk. Clearing of brush in populated areas has led to the establishment of invasive plant species that grow very rapidly and provide fuel when they die back in the dry season. There has been constant friction between responsible forest management, regulation, and development. The potential for gigafires will force a reckoning between planning and forestry departments in the future so that forest and brush can be managed through controlled burns. Many communities are also replanting native plant species and using animals to clear out areas overwhelmed by invasives. At the same time, growth needs to be managed so that safe evacuation can occur in the case of a large fire. Sadly, highly destructive wildfires are part of California’s future.
As are mudslides. The huge Thomas Fire destroyed much of the vegetation that held hillslopes in place. The burned slopes were then hit in early January by major rains that sent torrents of mud from the hills into the valleys below. The catastrophic mudslides killed at least 20 people and caused massive property damage. Fire and mud are intricately related.
One of the main messages from the 2022 IPCC report is the need to adapt to a future in which fire is more common and more deadly. In fact, California is already making progress adapting its communities to live with the threat. In some areas, for example, development is being banned in risky locations near forests, and best practices are being applied for clearing brush and maintaining fire breaks. Australia has also been applying these best practices for a number of years.
What is also clear from the IPCC report is that wildfires, like drought, will have a major impact on human health, especially for children, the elderly, and the poor. The impact will also be more severe in the developing world, where fires set for the purpose of deforestation are having a major effect on human health as well as wiping out large numbers of species, a topic we will return to in Module 11.
Heat waves are days- to week-long events with extremely high temperatures. These events are becoming more common with a changing climate, are forecast to become more frequent in many parts of the world, and are occurring earlier in the year (June 2022 was a dress rehearsal for that!). Heatwaves are often part of extended droughts and are associated with wildfires. They are major public health risks, especially for the very young and older populations, as well as the poor who do not have access to air conditioning or basic hydration. In the developing world, heat waves can be very deadly; in India in 2015, more than 2,200 people died due to excessive heat.
Some of the most drastic heatwaves occur in Australia, a continent which is also characterized by devastating drought and perilous wildfires. The last few years have seen major heatwaves across the continent, shattering local and continental temperature records. In 2013 Australia got so hot that they had to add new colors to the temperature map. There were days that year when the center of the continent topped 125 degrees F (52 degrees C).
In 2019 Australia broke records again in a summer of wildfires, drought, and heat. On December 19, the average high for the whole continent was 107.4 degrees F (41.9 degrees C) shattering the record set the day before by over a degree C.
In the US, heatwaves in the desert southwest, including Phoenix and Las Vegas, are part of a normal summer. Phoenix regularly reaches 112 degrees F (44.4 degrees C), and it has been known to exceed 120 degrees F (48.9 degrees C). These are shade temperatures; corresponding temperatures can reach 168 degrees F (76 degrees C) in the sun right above ground level. Because concrete traps heat, cities like Phoenix get particularly hot and are prone to heatwaves. In fact, this “heat island” effect makes cities as much as 7 degrees F warmer during the day. Nighttime often does not provide much relief, with temperatures above 80 degrees F for many nights in a row.
June 2021 was a sign of things to come in the northwest US and western Canada, with temperatures topping out at 116 degrees F in Portland and 108 degrees F in Seattle. Lytton in British Columbia reached 121 degrees F, the hottest temperature ever recorded in Canada. These temperatures smashed records for these cities that are normally cool in the early summer.
The heat caused a minimum of 500 fatalities in the US and Canada among vulnerable populations, including the elderly. These cities are not adapted to extreme heat; only a low percentage (about 40%) of homes have air conditioning. The heat was so extreme that up to a billion clams and other shellfish were cooked inside their shells in an ecological catastrophe. The heat caused major wildfires to rage throughout the area, including in the town of Lytton, which was virtually destroyed. Several weeks later and further south, temperatures reached 130 degrees F in Death Valley, matching the hottest temperature ever reliably measured on Earth.
July 2022 saw record-breaking temperatures in Europe, notably in London, where it reached 40.2 degrees C (104.4 degrees F), an all-time record. That city is not adapted to temperatures that high and will have to invest significantly in areas like air-conditioned public transportation in the future.
Heatwaves will become more common and more extreme in most places in the future as the planet warms. Europe and Australia are going to experience more and more of them, as are places in India. The southern US will seem more like the Middle East in the future, with cities like Austin and El Paso becoming as hot as Dubai is today and Phoenix approaching Baghdad, Iraq temperatures. Washington DC is going to seem more like Austin in the summer, Boston will seem more like Philly and Billings, Montana more like El Paso.
More dramatically, there may be places in the Middle East and Northern India where humans may not be able to live in the future because it is impossible for the human body to cope with the searing heat. To be more precise, a wet-bulb temperature, a measure that combines heat and humidity, of 95 degrees F (35 degrees C) is thought to be the threshold at which a combination of kidney, heart, or even brain failure may commence, especially for vulnerable populations.
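For the curious, the wet-bulb temperature can be estimated from ordinary air temperature and relative humidity. The sketch below uses Stull's (2011) empirical approximation, a common shortcut that is not part of this course's material; exact values require full psychrometric calculations, and the formula is only valid near standard surface pressure:

```python
import math

def wet_bulb_stull(T_c, rh_percent):
    """Stull's (2011) empirical wet-bulb approximation.

    T_c: air temperature in degrees C; rh_percent: relative humidity (%).
    Roughly valid for humidities between about 5% and 99%.
    """
    T, rh = T_c, rh_percent
    return (T * math.atan(0.151977 * math.sqrt(rh + 8.313659))
            + math.atan(T + rh) - math.atan(rh - 1.676331)
            + 0.00391838 * rh ** 1.5 * math.atan(0.023101 * rh)
            - 4.686035)

# 40 C (104 F) air at 75% relative humidity: the wet-bulb temperature
# lands near the 35 C (95 F) survivability threshold discussed above.
tw = wet_bulb_stull(40.0, 75.0)
```

Notice that even "ordinary" heat becomes dangerous when humidity is high, because evaporative cooling of sweat, which the wet-bulb temperature measures, stops working.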
Like drought and fire, heat is an area where the 2022 IPCC report stresses the need for adaptation. This is already taking place in the developed world, where cities are reducing the heat island effect by, for example, planting more trees, making roofs green by covering them with plants, and using pavement materials that reflect heat as opposed to absorbing it. Unfortunately, these strategies are more difficult to apply in developing nations, where necessities like air conditioning are also less available. Thus, it is likely that heatwaves will become increasingly deadly in coming decades.
Antarctica and Greenland, the two large ice sheets, represent some of the most bleak and hostile places on Earth. Not many geoscientists have the mettle to explore these remote places, but they remain one of the essential frontiers for research. These large and thick ice sheets look relatively homogeneous compared to other parts of the planet, but in fact, their behavior is not completely understood. The fact remains that if the ice sheets on Antarctica and Greenland were to melt completely, something that fortunately cannot happen over decades or even centuries, global sea level would rise by about 70 meters, which would drown most coastal cities such as New York, Shanghai, Mumbai, and Jakarta. Even this distant threat should keep global leaders up at night!
Ice is the frozen segment of the atmospheric moisture cycle. As you will remember from the discussions of Snowball Earth and the Pleistocene ice ages in Module 1, it is a very important component of Earth’s climate system — ice is the most reflective material on the surface, and as such it can exert an important control on how much sunlight the Earth absorbs, which directly affects the Earth’s temperature.
Ice is also an important and highly sensitive indicator of climate change in the polar regions and in areas of high altitude where mountain glaciers occur. Glaciers will grow or shrink in response to changes in temperature and precipitation. The temperature response is pretty obvious — glaciers melt as the temperature rises. The precipitation response is perhaps less obvious, but glaciers can expand if winter precipitation increases, and they shrink if the winter precipitation decreases. Just as ground temperatures respond somewhat sluggishly to surface temperature changes, glaciers respond sluggishly to climate changes, which is good in the sense that they give us a better sense of the important trends in climate change that might otherwise be obscured by short-term variations. In the following pages, we discuss the key types of ice and how they are changing: glaciers in mountainous regions, sea ice in the Arctic, and the large Antarctic and Greenland ice sheets.
We’ll start with some striking images taken by glaciologists around the world — these are photos of mountain glaciers taken from the same spot at different times, and they provide us with some fairly shocking observations on just how much glaciers can change and have done so recently.
For another Alaskan example, we turn to satellite imagery of Columbia Glacier, which does not reach as far back in time but nevertheless reveals some dramatic changes:
The above scene is from 1986, and at this point in time, the end of the glacier (terminus) is located down near the bottom of the image. Below, we jump forward in time to 2011:
As you can see, the terminus here has retreated by about 15 km in just 25 years — a very impressive rate.
Grinnell Glacier, in Glacier National Park, Montana, as seen from the same vantage point over a 67 year period. The glaciers in this famous national park are all in such rapid retreat that the park may need a new name in a few decades.
Glaciers in the Alps are shrinking too. Check out SwissEduc Glaciers online [77] to see one good example — at the bottom of the page is a comparison that flips back and forth from the past to the present as you move your mouse over the image.
The same story seen in Alaska, Montana, and the Alps holds for glaciers in more tropical settings, as can be seen from the images of Qori Kalis glacier in the Andes Mountains of Peru, below.
Studies of glaciers around the world show that an overwhelming majority are losing mass over time. In most cases, this loss of mass is reflected in retreat: the glaciers become shorter. The figure below shows a selection of data from glaciers around the world, documenting this pattern of retreat.
The vast majority of glaciers on Earth are melting, and this melting began around the time that the temperature records indicate warming started, near the beginning of the 1900s.
If we combine the records of glacier length changes from around the world into one graph, we get a pretty clear idea of what is happening.
On the graph above, the y-axis plots the length of the glaciers relative to their length in 1950, so this is a kind of length anomaly. A positive number means that, on average, glaciers were longer than they were in 1950; negative numbers mean they were shorter. Here, we can see that beginning around 1850, glaciers around the world began to shrink, and this trend continues to the present. The average glacier has retreated almost 2 kilometers in this time.
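The anomaly calculation described above is simple to carry out. Here is a minimal sketch in Python; the glacier lengths are made-up numbers for illustration (real series come from compilations such as Oerlemans, 2005), and 1950 is the reference year:

```python
# Sketch of a glacier length anomaly relative to a 1950 baseline.
# The lengths below are illustrative, not real measurements.
lengths_m = {1850: 12_500, 1900: 12_100, 1950: 11_600, 2000: 10_400}

baseline = lengths_m[1950]
anomaly_m = {year: length - baseline for year, length in lengths_m.items()}

for year in sorted(anomaly_m):
    # positive = longer than in 1950; negative = shorter
    print(year, anomaly_m[year])
```

Averaging such anomalies over many glaciers gives the global curve shown in the figure.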
It is possible to estimate the magnitude and history of temperature change needed to produce this history of glacial retreat, and Oerlemans (2005) did this using a simple model; the results are shown below.
The thick blue line here is the temperature history needed to produce the timing and magnitude of the glacial retreat history shown in the previous figure. For comparison, we also see the instrumental temperature record in red (Jones and Moberg, 2003) and the temperature reconstruction based on multiple climate proxies (Mann et al., 1999). Note the excellent match with the instrumental record in the last century.
Just like their smaller counterparts, the huge ice sheets of Greenland and Antarctica are shrinking. This is critical because melting of these ice sheets affects climate through the albedo feedback (Module 3) as well as global sea level (Module 10). Melting of all of the ice in these ice sheets would raise global sea level by about 70 meters (Greenland would contribute about 6 meters and Antarctica about 60 meters). Don't worry, this isn't going to happen any time soon, but the concern is that large glaciers on the edges of the continents are becoming increasingly unstable and could fail in the coming decades. Thwaites Glacier, also known as the "doomsday glacier," is the one scientists are most concerned about. Collapse of this glacier could happen very rapidly and would raise global sea level by about 65 cm. We discuss this glacier in more detail in Module 10.
Given the remoteness and difficulty associated with studying these ice sheets, we only have good data on their size for the last two decades, thanks to the advent of satellite systems that can monitor these glaciers. In particular, the GRACE satellite system has provided some very important data on the changes occurring in Greenland and Antarctica. This ingenious system consists of a pair of satellites chasing each other in the same orbit around the Earth. The distance between the satellites changes according to subtle variations in gravity at the surface of Earth. As the lead satellite approaches a region of stronger gravity (due to more mass near the surface), it pulls away from the trailing satellite and then slows down as it passes the region of excess mass. In this way, the satellites measure subtle changes in gravity, and since they pass over the same area every few days, they can detect changes in the gravity of a certain spot over time. If a big ice sheet loses mass due to melting, its gravitational pull on the satellites diminishes, and in this way, the satellites can detect changes in the mass of these ice sheets — they are effectively “weighing” the glaciers, which is an extraordinary achievement. The results can be seen in the videos below. Note: videos do not have audio.
Download simulation [79]
Download simulation [81]
These simulations show the time series of Greenland and Antarctic ice mass changes from GRACE satellite data. You can see that melting is concentrated near the edges of the ice sheets and occurs in fits and starts (i.e., it is not gradual). The floating ice shelves at the edges of the ice sheets hold the grounded ice back in a process known as buttressing, so when the ice shelves melt, the ice at the edges of the ice sheets flows and melts faster.
Other satellites passing over these ice sheets can measure the areas where surface melting produces small ponds of melt during the melting season. Much of the meltwater freezes back into the ice, but in some places near the edges of the ice sheets, the water sinks down into crevasses and travels all the way to the bottom of the ice, where it can lubricate the base and help accelerate the flow of the ice.
The above image shows the Greenland melt anomaly, measured as the difference between the number of days on which melting occurred in 2007 and the average annual number of melting days from 1988-2006. The areas with the most additional melt days appear in red, and areas with below-average melt days appear in blue. Although faint streaks of blue appear along the coastlines, notably in northwestern and southeastern Greenland, red and orange predominate, especially in the south.
The video below shows the dramatic loss of ice on Antarctica between 2002 and 2016. The loss is mostly around the edges of the continent and focused in a few areas.
One of the most striking examples of climate change is related to the Arctic sea ice; the video below shows the drastic changes in the extent of Arctic sea ice over the last 40 years. The ocean is now routinely open to shipping in the summer.
This visualization shows the age of the Arctic sea ice between 1979 and 2022. Younger sea ice, or first-year ice, is shown in a dark shade of blue, while ice that is four years old or older is shown as white. The graph displayed quantifies the area covered by sea ice 4 or more years old, in millions of square kilometers.
The Arctic sea ice undergoes large fluctuations over the course of a year, and, like all aspects of the climate system, there is a good deal of natural variability. So, while one or two extreme years do not necessarily make a trend, they may be part of a trend. Visit NASA's Earth Observatory [85] to see a nice set of maps looking down on the North Pole showing the sea ice extent over the last 12 years; each map shows the long-term mean ice extent in a yellow line. One thing that becomes apparent is that there is much more variability in the end-of-summer minimum ice extent than the end-of-winter maximum; another thing that is apparent is that the reduction in Arctic sea ice is now a long term trend.
In the figure below, we see a summary of data on the ice extent, reported as an anomaly (departure from the mean), and it now becomes apparent that there has been a more or less steady decline since about 1970.
In addition to the reduced area of coverage, the Arctic ice is also becoming thinner. The thickness of the ice in the Arctic has been monitored for a long time by the US Navy, using submarines. Now, satellites can measure ice thickness. The comparison below shows a 40% to 50% decrease in the thickness of ice from the average over 1958-1976 compared to the present.
This reduction in the coverage of Arctic sea ice is significant because during the summer months, when the sun is at its brightest in the polar region and there are 24 hours of daylight, the reflective ice cover is reduced, allowing the absorption of much more solar energy, which then warms the whole polar region. This change in sea ice coverage is not just an Arctic phenomenon; in the Antarctic, the sea ice is also decreasing in area, as the map below illustrates.
Finally, a word on sea level. The melting of ice sheets and mountain glaciers is partially responsible for a significant rise in sea level: about 20 cm in the last 100 years. However, thermal expansion of seawater as a result of warming is equally, if not more, important. Note that because sea ice is already floating in the ocean, its melting does not contribute to sea level rise.
Like many Americans, I used to dream of owning coastal property. I have experienced hurricanes: I went to help friends recover from Andrew in Miami in 1992, and I lived through Hurricane Fran in North Carolina in 1996, so I have seen the destruction they can cause. But I still maintained my dream. Every time I went to the beach, I would pick up real estate brochures; on a cold, snowy weekend, I would browse coastal properties on Zillow. But my dream is being replaced by practicality. Coastal property is a risky investment in the 21st century! Research shows that, fueled by the extra heat from global warming, hurricanes are becoming stronger, slower moving, and wetter, all a recipe for increased devastation of coastal communities.
We are going to focus on storms from 2017, 2019, and 2022. We start with 2017: Harvey, Irma, and Maria, three massive hurricanes in three weeks! The big question is whether 2017 was just an unusually active year, or whether these monster storms are the new normal on a warming planet.
Hurricane Harvey, August 2017
Each of these storms had some incredible elements. The eye of Harvey roared ashore in south Texas, more than a hundred miles south of the booming metropolis of Houston, the fourth-largest city in the US, whose metropolitan area is home to close to 6 million people. The storm moved north toward the city and literally parked there for 4 or 5 days, drawing moisture in from the warm Gulf of Mexico. By the time the storm moved on to the north, parts of Houston had received close to 52 inches of rain. Harvey dumped a grand total of 33 trillion gallons of water on Houston and points north, causing catastrophic flooding. For context, 33 trillion gallons would fill a cube with sides of about 3 miles; it is a massive amount of water! Several small creeks north of the city rose more than 19 feet above their banks. The storm was the largest single rain event ever recorded in the lower 48 states! The total damage, still rising as I write this, is estimated at about $150 billion, one of the largest catastrophes on record. Harvey caused 82 deaths in the US.
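The cube comparison is easy to verify with back-of-envelope arithmetic; the only inputs are the gallon-to-cubic-meter and mile-to-meter conversions:

```python
# Back-of-envelope check: does 33 trillion gallons fill a cube ~3 miles on a side?
GALLON_M3 = 0.003785   # one US gallon in cubic meters
MILE_M = 1609.34       # one mile in meters

volume_m3 = 33e12 * GALLON_M3      # ~1.25e11 cubic meters of rain
side_m = volume_m3 ** (1 / 3)      # side of an equivalent cube
side_miles = side_m / MILE_M

print(round(side_miles, 1))        # prints 3.1 -- about 3 miles on a side
```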
To begin with, Houston is a very flood-prone city. Houston lies on a flat plain near the ocean. The natural landscape is grassland drained by creeks called bayous. The city lies very close to the Gulf of Mexico, which gets very warm in the late summer, close to 87 degrees F! This warmth is transferred to the air, keeping coastal areas warm and humid. The Gulf is an enormous heat and water factory. A key fact to understand is that for every degree Fahrenheit of warming, an air mass can hold roughly 3-4% more moisture (about 6-7% per degree Celsius), so as the Gulf has warmed over recent years, it contributes more energy and rain to hurricanes, making them more intense and much wetter.
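The moisture-temperature relationship comes from the Clausius-Clapeyron relation. As a rough sketch (an illustration, not the course's own calculation), we can estimate the saturation vapor pressure of air with the standard Magnus approximation and see how much it rises per degree of warming near Gulf-like temperatures:

```python
import math

def sat_vapor_pressure_hpa(t_c: float) -> float:
    """Saturation vapor pressure (hPa) via the Magnus approximation."""
    return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

# Percent more moisture the air can hold per 1 C of warming, near 30 C
t = 30.0
pct_per_deg_c = 100 * (sat_vapor_pressure_hpa(t + 1) / sat_vapor_pressure_hpa(t) - 1)
print(round(pct_per_deg_c, 1))  # prints 5.9 -- roughly 6% per degree C
```

The increase is a bit larger at colder temperatures (closer to 7% per degree C), which is the figure often quoted for the atmosphere as a whole.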
Houston might have been able to absorb this change in its natural state, but the city has grown very rapidly over the last two decades as industry has surged and jobs have been plentiful. The city prides itself on being business-friendly, and as a result, it has no zoning, meaning that there are few limits on construction. A shopping mall or factory can be located right next to a housing development or a city park. As a result, the city has a massive amount of concrete, roads, parking lots, and rooftops, with very little consideration paid to drainage. This contributed in a major way to the flooding during Harvey, as the photographs below testify. Harvey was a 1000-year event, meaning rainfall at that level is expected only that rarely. But as it turns out, the storm was the third 500-year event in the last three years, so it is clearly part of a new normal. After Harvey, the city began making minor changes to the way it develops, requiring new homes in the floodplain to be built higher, but it still resists zoning measures that would keep homes out of flood-prone areas.
Hurricane Irma, September 2017
Irma arose rapidly in the tropical Atlantic, and at one point it had sustained wind speeds of 185 mph. The storm intensified rapidly, a characteristic of large hurricanes, with wind speeds increasing by 45 mph in one day. It hit the islands of Barbuda, Antigua, St. Martin, and St. Barthelemy and caused catastrophic damage in these locations. The island of Barbuda, in particular, was literally flattened. Irma next set its sights on Cuba, where it came ashore as a Category 5 storm with sustained winds of 160 mph and a massive storm surge. The storm led to collapsed buildings and flooding of coastal areas, including the historic Malecón in Havana. Fortunately, Cuban authorities had evacuated close to a million people from low-lying areas.
A few kilometers can make a major difference in the history of a hurricane, and the landfall in Cuba, which was not initially predicted, weakened the storm significantly. Initial forecasts were for Irma to come ashore near Miami with wind speeds near 155 mph, but the storm tracked a little farther to the west and made landfall in the Florida Keys with maximum winds of 130 mph, then again at Marco Island on the west coast of Florida with winds of 115 mph. As it turns out, there have been far larger storms in Florida, including Hurricane Andrew in 1992, which flattened the Miami suburbs. But Irma was yet another reminder of how vulnerable the state is to storms. Much of the southern half of Florida is naturally swamp or marshland that originally looked like Everglades National Park. But as in Houston, commercial interests, and in the case of Florida, the desire of citizens to own a small piece of paradise, have led to massive construction in the last decade: runaway development with insufficient environmental regulation. So, instead of swampland that absorbed moisture and drained it back toward the ocean, large expanses of concrete funnel the rain into walls of water in cities and suburbs. Mangrove forests that previously protected the coast have been flattened. In fact, the Florida environment was degraded long before this, when the Army Corps of Engineers modified the natural drainage to provide water for the sugar industry.
As it turns out, Irma was less of a wind event than a storm surge event. As a hurricane moves toward land, it pushes a wall of water ahead of it; the result is storm surge. Hurricane Katrina in 2005 was also a storm surge event, with a surge of 28 feet measured at Pass Christian, Mississippi, east of New Orleans, the largest surge ever measured in a US hurricane. New Orleans is at or even below sea level, and its levee system, upgraded after Katrina, is designed to deal with surge, but the city remains highly vulnerable. The surge from Irma was about 10 feet in the Keys and at Marco Island, enough to cause significant damage. Nevertheless, the storm was generally viewed by experts as another wake-up call to what will likely happen if a major storm with winds over 160 mph and a 20-foot storm surge hits Miami or Tampa, the latter of which is extremely flood-prone. In all, Irma caused over 100 fatalities, most of them in the Caribbean.
Hurricane Maria, September 2017
Barely a week later, monster storm Maria developed just as Irma had, also undergoing rapid intensification, with an increase in wind speed of over 60 mph in one day! By the time it hit Dominica, Maria had sustained winds of 160 mph; it caused utter devastation and killed 15 people there, then took aim at Puerto Rico. The storm hit the east coast of the island with sustained winds of 155 mph and dumped up to 3 feet of rain in mountainous areas. The impact on Puerto Rico was like the combined effect of Harvey in Houston and Irma in Barbuda. Maria caused massive destruction on the island. The power grid was destroyed, leaving all 3.4 million residents without electricity. Many people had no running water for days, and sewers and cell phone networks were also out. Dams were in danger of breaching. Some 60,000 homes lost most or all of their roofs, and only 392 of the island's 5,000 miles of roads remained open. The storm defoliated a large number of trees and destroyed 80 percent of the island's agriculture. The total damage is estimated at $90 billion, but that does not include the human misery the storm caused. Diseases spread due to the lack of clean drinking water; the water-borne bacterial infection leptospirosis was widespread. Overall, the storm directly or indirectly led to as many as 3,000 deaths, though the true number may never be known, and, years later, the island is still recovering.
Hurricane Dorian, September 2019
None of these storms compared to Dorian, which hammered the Bahamas in 2019. Dorian made landfall on September 1, 2019, on Great Abaco Island with sustained winds of 185 mph and gusts over 220 mph, making it one of the strongest storms on record in the Atlantic basin.
Dorian was an unusual storm in several ways. It was enormous, and it was particularly deadly because the devastating winds were combined with an extremely slow forward motion of about 5 mph, so the storm ravaged the Bahamas for days. Devastating storms like Andrew and Katrina moved much faster; the slow speed of Dorian made the damage much, much worse. After pummeling the Abacos, a group of islands in the northeast Bahamas, the storm went back over open water and made landfall without weakening on September 2 on Grand Bahama, one of the largest islands in the Bahamas, where it literally stalled for a day before weakening slightly and moving back over open water. The damage to the Bahamas was truly catastrophic.
At landfall on the Abacos and again on Grand Bahama, Dorian's intense winds were accompanied by a massive storm surge of about 6-7.5 meters (20-25 feet) and heavy rain. In total, about a meter (3 feet) of rain fell over most of the northern Bahamas. There are harrowing tales of people clinging to trees and of other narrow escapes, but sadly many were not so fortunate. The official death toll from Dorian is 70, but it is almost certainly much higher because many undocumented residents were living in shantytowns. Initially, over 1,000 people were missing; that number is now around 300, so the death toll is likely 500-600. The true number may never be known.
Hurricane Ian, late September, 2022
Now to Hurricane Ian, which hit southwest Florida in late September 2022. The storm was notable for how rapidly it intensified, with wind speeds increasing from 75 mph to 155 mph in just two days. The very large storm came ashore at Cayo Costa Island, just north of Fort Myers, on September 28, 2022. Sustained wind speeds at landfall were 150 mph, likely with higher gusts. It was the fifth-strongest storm ever to hit the contiguous United States.
But the damage in southwest Florida was not just inflicted by the wind. Since the path of the storm closely paralleled the coast as it approached land, and because the highest surge is in the right front quadrant of the storm due to its counter-clockwise circulation, the storm surge over large areas was devastating.
The storm surge was up to 15 feet above normal sea level along the barrier islands of Captiva and Sanibel and at Fort Myers Beach. This wall of water caused massive devastation in these areas, as seen in the photographs below.
Aerial imagery from NOAA's National Geodetic Survey of damage in the Times Square district of Fort Myers Beach, Fla., after Category 4 Hurricane Ian struck the area.
Credit: www.noaa.gov [116]
Ian moved slowly to the northeast across the Florida peninsula, and this slow passage caused heavy rainfall over a wide area, with precipitation totals of up to 17 inches over a 12-24 hour period. This rainfall caused widespread flooding well inland, in places such as Orlando.
One of the main stories of the storm was prediction. The different forecast models agreed closely as the storm approached southwest Florida, but because the path was so nearly parallel to the coast, a small change in track led to a major difference in the landfall location. Two days out, the forecast path was more northerly, with the eye expected to make landfall near Tampa; then, a day out, a minor eastward jog in the forecast shifted the eye well to the south. This change led to some delays in evacuation in the Fort Myers area.
The storm caused massive damage over a widespread area, with catastrophic damage to housing along the coast, especially in Fort Myers, Sanibel, Captiva, and Port Charlotte. More than 2.7 million people lost power at the height of the storm, and a large number were left without clean water. Overall, the storm led to 136 fatalities in Florida and a total of $50 billion in damage.
Hurricanes and climate change
So, finally, we come back to the question of whether climate change is responsible for the surge in powerful hurricanes in 2017-2022. At the outset, we must stress that this question cannot be answered unequivocally. However, several factors make it safe to say that large storms will be more common in the future and that they will cause increasing amounts of damage. To develop, storms need warm sea surface temperatures (over 80 degrees F), abundant moisture, and rotating circulation, as you can see in the video below.
As we will see in the lab at the end of this module, the ocean has warmed by 1 to 2 degrees C (roughly 2 to 4 degrees F) over the last century, and this allows the air above it to hold roughly 6 to 14 percent more moisture (about 6-7% per degree C, or 3-4% per degree F). Thus, there is a lot more fuel for hurricane development. The formation of hurricanes is also helped by weather disturbances, often originating off West Africa, but no relationship between these disturbances and climate change has yet been established.
In the case of Harvey, the volume of water is clearly a result of an extra-warm ocean; for Irma and Maria, the ferocity of the winds and the rapid intensification are also related to water temperature. For Dorian, the size and slow movement are a result of a warmer atmosphere. So, climate change is adding fuel to the fire for large hurricane development, and 2017 was a harbinger of things to come. There is one other factor to consider, perhaps the one that will prove most devastating in decades to come: sea level rise. The ocean is now about a foot higher than it was in 1900. Projections are for a possible 6-foot rise in sea level by the end of this century if we do not cut greenhouse gas emissions significantly. We will discuss the issue of sea-level rise in great detail in Module 10. Such a rise would mean that even a storm like Irma, with a moderate storm surge, would be catastrophic.
As with other impacts of climate change, the latest IPCC report stresses the need for adaptation to the threat of stronger hurricanes. Coastal communities will need to adapt to this threat by building homes higher and stronger, building sea walls and surge barriers, and gradually pulling back from the coast, an initiative called managed retreat. This is already happening near New York City and elsewhere in the US but again will be much more difficult to achieve in the developing world.
The night of December 10 and early morning of December 11, 2021, saw devastating tornadoes tear across the central US from Arkansas to Kentucky. In all, 71 tornadoes were confirmed, rated up to EF-4, with wind speeds up to 190 miles per hour. The tornadoes caused massive damage totaling $3.9 billion, destroying whole communities, causing 88 fatalities, and injuring over 600 people. The worst damage was in the town of Mayfield, Kentucky, which was leveled by a strong EF-4 tornado. Much of the center of town was irreparably damaged, with houses and stores destroyed; 22 people died in that town alone. The tornado forecasts were generally highly accurate that night, but sadly, the warnings were not always acted upon. What was particularly significant about the outbreak is how late in the season it occurred. Tornadoes in the central US are frequent in spring, when warm and cold air masses collide over the region; it is very unusual for such strong storms to occur in winter.
By studying the network of weather stations across the US, researchers have found a couple of interesting results: the biggest storms recorded in a given year (not including tornadoes) are increasing in strength, and the frequency of extreme storms is also increasing. In addition, the area known as "tornado alley," where touch-downs are common, is shifting eastward from Texas, Oklahoma, and Kansas to Arkansas, Louisiana, Mississippi, Alabama, and Tennessee.
An extreme storm is one in which the rate of precipitation exceeds the long-term mean rate of precipitation for a given site by a certain amount (so an extreme storm in a wet region like the northeast US has a much higher precipitation rate than an extreme storm in a drier region). So, you might be asking yourselves whether these results translate into more frequent, massive tornadoes. This is a somewhat controversial topic. The consensus answer is that with a warmer atmosphere, tornadoes will become more powerful (just like hurricanes), but the jury is still out on whether they will become more frequent. What is clear from the December 2021 tornadoes is that these powerful weather systems can occur at any time of year.
Download this lab as a Word document: Lab 2: Hurricanes [125] (Please download required files below.)
In this lab, we will observe the tracks of the largest storms of the last century, and learn about the impacts of those storms on land.
The goals of the lab are:
There are two Google Earth maps to load. The first, the Hurricane Tracks kmz file [126], shows tracks of storms from 1900 to 2017. The second, the Temperature Anomalies kmz file [127], shows average August temperatures for each year, calculated relative to the average temperature between 1900 and 1910. You can switch back and forth between the maps. Both maps have sliders at the top left of the screen that allow you to step through the storms and temperatures over time. Each storm track has points that show the wind speed and pressure at different stages of the storm's development. We definitely recommend that you not try to look at all the storms at once, or you will see a maze of lines. Please make sure that the slider at the top left covers the relevant range of dates; otherwise, you will not be able to view the tracks for the desired storm. Also, a few storms, including Betsy, have names that do not show unless you zoom in close. Note also that we break the 2000-2017 period into two parts: 2000-2010 and 2010-2017.
As in the lab for Module 1, we begin with some practice questions that you can take in the Lab 2 practice submission, where you will receive the answers to the questions. Once you feel good about these questions, move on to the graded assignment. If you have any questions about the practice questions, please let us know. Remember, you only get one attempt at the graded assignment.
The video below will help you with operations in Google Earth. We HIGHLY recommend you watch it to learn exactly how to manipulate the files and use the historical imagery.
Video: Controls for Module 2 Lab (06:35)
Part A. In the first part of the lab, we look at the tracks of hurricanes. You will need to look for the storm names. Load the name of the storm once you have found it and click on a point to find the wind speed and pressure. For certain storms, we will include the storm surge as well as the precipitation in areas near the landfall. You will also need to observe the elevation of areas close to the coast and look at historical imagery to determine the impact of the storm on coastal communities.
A. About half of the town would be flooded
B. None of the town would be flooded
C. All of the town would be flooded
Part B. In the second part of the lab, we will observe the change in temperatures of the Atlantic Ocean over the last century that is related to the generation of more powerful hurricanes. Load and turn on the temperature anomaly kmz. By pressing the year buttons on the left, you can observe the August temperature anomalies every five years from 1910 to 2000, and annually from 2000 to 2017, relative to the average temperature from 1900 to 1910.
Center the map over the Atlantic Ocean so you can see Africa as well as North America, including the Gulf of Mexico. As we have learned, the warmer the temperature, the more energy there is to fuel hurricanes, and the more moisture the air can hold.
A. 1965
B. 1975
C. 1985
D. 1995
E. 2005
Which year would temperatures in the Atlantic have been favorable for hurricane development?
A. 1960
B. 2017
C. 1990
D. 2000
E. 2007
What is the general trend for temperature change between 1900 and 2017?
A. Warming
B. Cooling
C. Stayed consistent
Which decade would have been slow for hurricane generation in the Atlantic based on temperatures?
A. 2000-2010
B. 1990-2000
C. 1970-1980
In this module, we have covered a broad range of observations that are pertinent to recent climate change. Here is a quick recap:
The instrumental surface temperature record goes back to 1880; multiple analyses yield similar results, indicating a warming of about 1-1.5°C averaged over the globe in the past 150 years. The majority of the warming has occurred in the last 50 years. The map-view pattern reveals that the polar region of the Northern Hemisphere has warmed much more than other regions of the globe.
Satellite temperature measurements, available for a much shorter time period, essentially confirm the analysis of the instrumental temperature record.
Sea surface temperature (SST) records from ships go back to 1850 and make up an important part of the data that provide global temperature estimates; these records are similar, though somewhat subdued in comparison to just land surface temperatures. The SST, though, represents just the skin of the oceans; to see deeper, we rely on measurements from a system of buoys that shows the oceans are slowly warming — only about 0.1 to 0.2 °C averaged over the globe during the past 50 years — but this is the temperature change in the whole upper 700 m of the oceans, which is a vast amount of water. So while the ocean has absorbed a huge amount of heat, its overall temperature has changed little.
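The "huge amount of heat" deserves a quick sanity check. A rough sketch, assuming a global ocean surface area of about 3.6 × 10^14 m² and standard values for seawater density and specific heat (illustrative constants, not figures from the course):

```python
# How much heat does warming the upper 700 m of the ocean by ~0.15 C represent?
OCEAN_AREA_M2 = 3.6e14   # approximate global ocean surface area
DEPTH_M = 700            # depth of the warming layer discussed above
RHO = 1025               # seawater density, kg/m^3
CP = 4000                # specific heat of seawater, J/(kg K)
DELTA_T = 0.15           # midpoint of the 0.1-0.2 C warming

volume_m3 = OCEAN_AREA_M2 * DEPTH_M
heat_j = RHO * CP * volume_m3 * DELTA_T
print(f"{heat_j:.2e}")   # ~1.5e23 joules -- an enormous amount of energy
```

For comparison, this is thousands of times the world's annual energy consumption, which is why a seemingly tiny ocean temperature change matters so much.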
Through the use of multiple proxies, the average global temperature has been reconstructed about 2000 years into the past. These results indicate a Medieval Warm Period (AD 950 – 1250) that was almost as warm as today, and a Little Ice Age (AD 1350 - 1850) that was more than a degree colder than today, followed by the modern warming trend.
The temperature versus depth measurements from boreholes preserve a smoothed record of the history of surface temperature change; global studies of these records provide a smoothed temperature history that goes back to the year 1500. This temperature history is in good agreement with the instrumental record.
Mountain glaciers from around the world are shrinking; some at astounding rates. The history and magnitude of melting indicate a warming history that closely matches the results from the instrumental surface temperature record.
Satellite data reveal the mass changes of the two large ice sheets; both are losing mass fast (2.3e15 kg of ice in 6 yr), contributing about 8 mm to sea level rise in this short period.
Submarine sonar readings and satellite measurements show that the sea ice in the Arctic Ocean is declining in thickness and in areal extent, so much so that the Northwest Passage is now open for a month or two each summer.
Based on tide gauges, this record shows that sea level has risen about 20 cm in the past 100 years (2 mm/yr). This rise is a result of the melting of ice combined with the thermal expansion of the warmer ocean. Much more on this in Module 11.
After reviewing all of the data, here is what the leading scientific academies of the US, UK, Russia, China, France, Germany, Italy, Brazil, Japan, Canada, and India jointly concluded in a 2005 statement:
Climate change is real. There will always be uncertainty in understanding a system as complex as the world’s climate. However, there is now strong evidence that significant global warming is occurring. The evidence comes from direct measurements of rising surface air temperatures and subsurface ocean temperatures, and from phenomena such as increases in average global sea levels, retreating glaciers, and changes to many physical and biological systems. This warming has already led to changes in the Earth's climate.
This is the consensus from leading scientists around the world, not politicians, journalists, or business people who may stand to gain financially from taking a stand on climate change. The motivation of these scientists is to understand what the data mean and to help the broader public understand the implications of the data. At the end of the day, the data tell us that the climate is changing; the real challenge before us lies in finding ways to respond to this change through some combination of taking steps to minimize the change and finding ways to adapt to the change.
In this module, you should have mastered the following concepts:
You should have read the contents of this module carefully, completed and submitted any labs, posted your Yellowdig Entry and Reply, and taken the Module Quiz. If you have not done so already, please do so before moving on to the next module. Incomplete assignments will negatively impact your final grade.
This course is all about the Earth’s climate. Thus, it is essential that you have a solid understanding of how the climate system works. This module is all about the climate system. It is by far the most technical module in the course, and our philosophy is to lay out the science in a comprehensive way, equations and all, so that you can see that Earth's climate is in part fairly simple, governed by physical relationships that describe how heat from the Sun is exchanged on the surface of the Earth and in its atmosphere. Then, there are some very complex aspects of the Earth's climate that we will not devote much time to.
Here is an example of why this module is important. The Polar Vortex has become a household name in the US in recent years. In the winter of 2021, cold air from the vortex brought unusually cold temperatures to Texas, crippling a power system that was not built to withstand such temperatures. The power cuts caused chaos: up to 5 million people were without power, often for many days; 12 million people lost water service due to freezing pipes; and 151 people died as a result of hypothermia and carbon monoxide poisoning.
Those of us on the East Coast and Midwest of the US and our neighbors in Canada, 187 million people in all, lived through an extremely cold week at the beginning of 2014. Air temperatures, without the windchill factored in, reached −35 °C in eastern Montana, South Dakota, and Minnesota. This cold was a result of the southward expansion of the polar vortex, a whirlwind of cold, dense air that is normally restricted to the area around the poles. Understanding the polar vortex, and how it became unstable and swept across the Midwest and eastern parts of Canada and the US, is key to interpreting the significance of the extreme cold in early 2014. Without this understanding, you might think that the expansion of cold air is a sign of a cooling climate. However, it is likely that the opposite is the case; the cold snap was actually a result of warming. Here is how it works. As you will learn in this module, the northern high latitudes are warming more rapidly than the rest of the globe as a result of melting sea ice. You will also learn that such warming leads to diminished wind velocities, including in the polar vortex. As the vortex weakens, it becomes less stable and begins to wobble and stray from the region around the North Pole. The 2014 cold snap was just one of these wobble events, and the projections are for polar vortex excursions to become more common over North America in the future, just as other extreme events, such as hurricanes like Sandy, heat waves, and droughts, become more frequent.
Now, right off the bat, we need to make it clear that the "simple" relationships are often portrayed in the module in terms of equations. You do not need to be a math major to understand these equations, nor do we want you to memorize them. The point of showing the equations is not to cause great anxiety, but to provide an understanding of the relationship between two variables. For example, you should be looking to distinguish relationships that are linear (such as a = b·x, where · means multiplied by) from those that are quadratic (such as a = b·x²). This is the level at which we expect you to understand equations. One last word: the lab for this module is designed to strengthen the fundamentals you learn in the reading. By experimenting with climate in the lab, you should come away with a really solid understanding of the climate system.
After completing this module, students should be able to answer the following questions:
Below is an overview of your assignments for this module. The list is intended to prepare you for the module and help you to plan your time.
Action | Assignment | Location |
---|---|---|
To Do |
There is some math in this section! It is mostly algebra. You should know how to read and understand these equations, but you do not need to memorize equations.
We begin with a quick glimpse of the global climate — and then we’ll try to understand why it looks this way. But first, what does climate mean? In the simplest sense, it is the average weather of a region — the average temperature, rainfall, air pressure, humidity, cloud cover, wind direction, and wind speed. This means that climate is not the same as weather; weather implies a very short-term description of the atmospheric conditions, and it tends to change in a complex manner over short time scales, making it notoriously difficult to predict. In contrast, the climate is less variable — it smooths out the variability of the short-term weather. This course is about climate, how it is changing, and what that means for our future; as we move through this class, you should remind yourself periodically that we are not talking about the weather — our time frame is much longer.
So, let’s have a look at the climate as expressed by temperature:
As you can see, the equatorial regions are the warmest, and the poles are the coldest, with Antarctica being noticeably colder than the Arctic. The temperature varies more within the continents than the oceans, and there is a pronounced northward extension of warm water in the North Atlantic.
The global climate system is like a big machine receiving, moving, storing, transferring, and releasing heat or thermal energy. The machine consists of the oceans, the atmosphere, the land surface, and the biota on land and in the oceans; in short, it consists of everything at the Earth’s surface. The average state of this system — the global climate — is represented most simply by the pattern of temperatures and precipitation at the surface.
In order to really understand this complex machine, we will have to understand something about its parts, but we also need to begin with some fundamental ideas about energy, heat, and temperature, including the source of the energy for the climate system — the sun.
In the broadest terms, energy is a quantity that has the ability to produce change in a physical system; it includes all kinds of kinetic energy (energy of motion) and potential energy (energy based on a body's position) and is measured in joules. One joule represents the amount of energy needed to exert a force of one newton over a distance of one meter; so 1 Joule = 1 N·m.
Energy expended per unit of time is a measure of power, and in the context of climate, power is expressed in terms of Watts (1 Watt = 1 Joule per second). This is also called the heat flow: the rate of energy flow.
Heat is simply the thermal energy of a body, measured in joules. Think of this as the total kinetic energy (vibrations) of the atoms of a material.
Energy flux is a measure of how concentrated the energy flow is and is given in units of Watts per square meter.
Temperature is obviously closely related to heat, but it is a measure of the average kinetic energy of the atoms within some body. Materials can be at the same temperature but have different amounts of thermal energy — for instance, a volume of water has much more thermal energy than a similar volume of air at the same temperature. Remember that there are 3 temperature scales: Fahrenheit, Celsius, and Kelvin. We’ll use Celsius and Kelvin, which have the same increments, just offset so that 0 °C = 273 K.
We begin with a very simple analog model for our planet’s climate (figure below) in which solar energy enters the system, is absorbed (some will have been reflected), stored (some will have been transformed or put to work), and then released back into outer space. The amount of energy stored determines the temperature of the planet. The balance between the incoming energy and the outgoing energy determines whether the planet becomes cooler, warmer, or stays the same. Notice the little arrow connecting the box to the Energy Out flow — this means that the amount of energy released by the planet depends on how hot it is; when it is hotter, it releases, or emits, more energy and when it is cooler, it emits less energy. What this does is to drive this system to a state where the energy out matches the energy in — then, the temperature (energy stored) is constant. This energy balance, sometimes called radiative equilibrium, is at the heart of all climate models.
Now, let’s consider the connection between this idea of an energy flow system to the actual Earth. As shown in the figure below, this system includes the atmosphere, the oceans, volcanoes, plants, ice, mountains, and even people — it is intimately connected to the whole planet. We will get to some of these other components of the climate system later, but to begin with, we will focus on just the energy flows — the yellow and red arrows shown below.
The figure above includes some new words and concepts, including short-wavelength and long-wavelength radiation, that will make sense if we devote a bit of time to a review of some topics related to energy.
The energy we are concerned with here comes in the form of electromagnetic radiation, so it will help us to review some aspects of this form of energy. Electromagnetic (EM) radiation comes in a spectrum of waves, each consisting of oscillating electric and magnetic fields; this energy is carried in discrete packets called photons. The spectrum is shown in the figure below:
In the realm of physics, a blackbody is an idealized material that absorbs all EM radiation it receives (nothing is reflected), and it also releases or emits EM radiation according to its temperature. Hotter objects emit more EM energy, and the energy is concentrated at shorter wavelengths. The relationship between temperature and the wavelength of the peak of the energy emitted is given by Wien’s Law, which states that the wavelength, λ (lambda), is:
λ = 0.0029/T (λ in meters, T in kelvins)
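To make Wien's law concrete, here is a minimal sketch (with the constant and the approximate temperatures used in this module) that computes the peak emission wavelength for the Sun and for the Earth:

```python
# Wien's law: the wavelength of peak blackbody emission is 0.0029 / T.

def wien_peak_wavelength(t_kelvin):
    """Return the peak emission wavelength in meters for temperature T (K)."""
    return 0.0029 / t_kelvin

# The Sun's surface (~6000 K) peaks in the visible part of the spectrum,
# while Earth's surface (~288 K) peaks in the infrared.
sun_peak = wien_peak_wavelength(6000.0)   # ~4.8e-7 m (~0.48 microns)
earth_peak = wien_peak_wavelength(288.0)  # ~1.0e-5 m (~10 microns)
```

This difference in peak wavelength is why we distinguish short-wavelength (solar) radiation from long-wavelength (terrestrial) radiation.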
But the energy emitted covers a fairly broad range of wavelengths, as described by Planck’s Law, as shown below:
The total amount of energy radiated from an object is also a function of its temperature, in a relationship known as the Stefan-Boltzmann law, which looks like this:
F = σT⁴
where σ is the Stefan-Boltzmann constant, which is 5.67e-8 W m⁻² K⁻⁴ (5.67e-8 is another way of writing 5.67 x 10⁻⁸; so 100 is 1e2, 1000 is 1e3, one million is 1e6, etc.), T is the temperature of the object in kelvins, and so F has units of W/m2. If you multiply F by the surface area of an object, you get the total rate of energy given off by the object (remember that Watts are a measure of energy, in Joules, per second). As you can see, the amount of energy emitted is very sensitive to the temperature, and that can be seen in the figure above if you think about the area beneath the curves of different color. This sensitivity to temperature is very important in establishing the radiative equilibrium or balance of something like our planet — if you add more energy, that warms the planet, and then it emits more energy, which tends to oppose the warming effect of the added energy. Conversely, if you decrease the energy added, the planet cools and emits less energy, which tends to minimize the cooling. This is a very important example of a negative feedback mechanism, one that works in opposition to some imposed change. The thermostat in your house is another good example of a negative feedback: it works to stabilize the temperature in your house.
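The T⁴ sensitivity is easy to verify numerically. Here is a small sketch using the Stefan-Boltzmann law as stated above (the two temperatures are illustrative):

```python
# Stefan-Boltzmann law: F = sigma * T^4, with flux F in W/m^2.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def blackbody_flux(t_kelvin):
    """Flux emitted by an ideal blackbody at temperature T, in W/m^2."""
    return SIGMA * t_kelvin ** 4

# A ~3.5% increase in temperature produces a ~15% increase in emitted flux,
# which is the essence of the negative feedback described above.
flux_288 = blackbody_flux(288.0)  # roughly 390 W/m^2
flux_298 = blackbody_flux(298.0)  # roughly 447 W/m^2
```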
The version of the Stefan-Boltzmann law described above applies for an ideal blackbody object, but it can easily be adapted to describe all other objects by including something called the emissivity, as follows:
F = εσT⁴
Here, ε (epsilon) is the emissivity, a unitless value that measures how good an object is at emitting (giving off) energy via electromagnetic radiation. A blackbody has ε = 1, but most objects have lower emissivities; a very shiny object, for instance, has an emissivity close to 0.
As mentioned earlier, an ideal blackbody absorbs all incident light, but in the real world, things absorb only part of the incident light. The fraction of light that is reflected by an object is called the albedo (from the Latin for whiteness). Black objects have an albedo close to 0, while white objects have an albedo close to 1.0. The table below lists some representative albedos for Earth surface materials. Most of these albedos are sensitive to the angle at which the sunlight hits the surface; this is especially true for water. When the Sun is at angles of 40° and higher relative to the horizon, the albedo of water is fairly constant, but as the angle decreases below 40°, the albedo increases dramatically, reaching about 0.5 at a Sun angle of 10° and 1.0 at a Sun angle of 0°. You are aware of this in the form of glare coming off the water in the early morning or in the evening before sunset.
Substance | Albedo (fraction reflected) |
---|---|
Whole Planet | 0.31 |
Cumulonimbus Clouds | 0.9 |
Stratocumulus Clouds | 0.6 |
Cirrus Clouds | 0.5 |
Water | 0.06 - 0.1 |
Ice & Snow | 0.7 - 0.9 |
Sand | 0.35 |
Grass lands | 0.18 - 0.25 |
Deciduous forest | 0.15 - 0.18 |
Coniferous forest | 0.09 - 0.15 |
Rain forest | 0.07 - 0.15 |
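To see what these albedos mean for energy absorption, here is a short sketch; the albedo values are approximate midpoints taken from the table above, and the 343 W/m2 figure is the globally averaged insolation used later in this module:

```python
# Fraction of sunlight absorbed by a surface = 1 - albedo.
# Albedos are approximate midpoints from the table above.
ALBEDO = {
    "water": 0.08,
    "ice_snow": 0.8,
    "grassland": 0.21,
}

def absorbed_flux(incoming_w_m2, surface):
    """Solar flux absorbed by a surface, in W/m^2."""
    return incoming_w_m2 * (1.0 - ALBEDO[surface])

# With 343 W/m^2 of average insolation, open water absorbs more than four
# times as much energy as ice and snow covering the same area.
water_abs = absorbed_flux(343.0, "water")    # ~315.6 W/m^2
ice_abs = absorbed_flux(343.0, "ice_snow")   # ~68.6 W/m^2
```

This large contrast between water and ice is worth remembering; it returns later in the discussion of feedback mechanisms.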
Most people have an intuitive sense for the effects of albedo on reflectance and solar energy absorption. This is why people wear white clothes in hot sunny climates and dark clothes in cold sunny climates. What should you wear if it is cloudy and cold?
In the above table, we see that the Earth’s average albedo is 0.31, but there is considerable variation in this value over the surface of the Earth and over time as well — this spatial and temporal variation in albedo of the Earth is shown in the figure below.
We have already mentioned the idea of radiative equilibrium, where the incoming energy and the outgoing energy are in balance, resulting in a steady temperature, but now we are in a position to combine a few other ideas to express this notion in a simple equation that is at the heart of all climate models. Before we begin, we introduce the solar constant, which is the amount of incoming solar electromagnetic radiation per unit area, measured on a plane perpendicular to the Sun's rays at the mean distance from the Sun to the Earth.
We begin with the energy (in units of W/m2):
Ein = (S/4)(1 − a)

Here, S is the solar constant, 1370 W/m2; dividing by 4 spreads this energy over the whole spinning globe, giving about 343 W/m2 (more on this factor of 4 below). The albedo, a, is about 0.31 based on satellite measurements. Then we deal with the energy out, using the Stefan-Boltzmann law:
Eout = σT⁴
Combining energy in (Ein) and energy out (Eout), we get:
(S/4)(1 − a) = σT⁴
Now, we can solve this to find what the equilibrium temperature of our planet is:
T⁴ = (S/4)(1 − a)/σ, so T = [(S/4)(1 − a)/σ]^(1/4); adding numbers, T = [343(1 − 0.31)/5.67e−8]^(1/4) = 254 K = −19 °C = −2.2 °F
Yikes! This is too cold — we know the mean temperature of the Earth is more like 15 °C (288 K or 59 °F). What have we left out? The simple answer is the emissivity, which makes sense since we know the Earth is not an ideal blackbody. (Remember that emissivity is a measure of how good an object is at emitting energy via electromagnetic radiation; in the above, we have effectively assumed an emissivity of 1, which applies to a perfect blackbody.) Using the equation above, let’s see what that emissivity should be:
(S/4)(1 − a) = εσT⁴
ε = (S/4)(1 − a)/(σT⁴) = 343(1 − 0.31)/[(5.67e−8)(288⁴)] = 0.606
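The whole derivation above can be reproduced in a few lines of code; this sketch uses the same constants as the text and recovers both the too-cold blackbody temperature and the emissivity of about 0.61:

```python
# Radiative equilibrium: (S/4)(1 - a) = epsilon * sigma * T^4
S = 1370.0       # solar constant, W/m^2 (about 343 W/m^2 spread over the globe)
A = 0.31         # planetary albedo
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def equilibrium_temperature(emissivity=1.0):
    """Equilibrium temperature (K) for a given emissivity."""
    return ((S / 4.0) * (1.0 - A) / (emissivity * SIGMA)) ** 0.25

def implied_emissivity(t_kelvin):
    """Emissivity needed to balance the energy budget at temperature T."""
    return (S / 4.0) * (1.0 - A) / (SIGMA * t_kelvin ** 4)

t_blackbody = equilibrium_temperature()  # ~254 K (-19 C): too cold!
eps = implied_emissivity(288.0)          # ~0.61: the greenhouse effect at work
```

Plugging the derived emissivity back into the first function returns the observed 288 K, which is a useful consistency check on the algebra.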
So, then, even if all of these equations have you seeing stars, what does this basically mean? There is something about the Earth that prevents it from emitting as much energy as it should. What is this something? It is the greenhouse effect — the key that makes our planet a nice place to live.
It all starts with the Sun, where the fusion of hydrogen creates an immense amount of energy, heating the surface to around 6000 K; the Sun then radiates energy outwards in the form of ultraviolet and visible light, with a bit in the near-infrared part of the spectrum. By the time this energy gets out to the Earth, its intensity has dropped to a value of about 1370 W/m2; as we just saw, this is often called the solar constant (even though it is not truly constant — it changes on several timescales):
Since the Earth spins, the insolation is spread out over an area 4 times greater than the disk shown in the figure above, so the solar constant translates into a value of 343 W/m2. This is a bit less than six 60-Watt light bulbs shining on every square meter of the surface, which adds up to a lot of light bulbs since the total surface area of Earth is 5.1e14 m2. How much energy do we get from the Sun in a year? Take 1370 W/m2, multiply by the area of the disk (π × r², where r is the radius of the Earth, 6.37e6 m), and this gives us an answer in Watts, which are joules per second; if we then multiply by the number of seconds in a year, we get the total energy in joules per year. The number is staggering — about 5.5e24 Joules of energy — and is roughly 10,000 times greater than all of the energy generated and consumed by humans each year.
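This back-of-the-envelope calculation is easy to check; the sketch below uses the same numbers as the text and approximates the year as 3.156e7 seconds:

```python
import math

# Total solar energy intercepted by Earth in one year.
S = 1370.0              # solar constant, W/m^2
R_EARTH = 6.37e6        # radius of the Earth, m
SECONDS_PER_YEAR = 3.156e7

disk_area = math.pi * R_EARTH ** 2          # Earth intercepts sunlight as a disk
power = S * disk_area                       # rate of energy receipt, Watts (J/s)
energy_per_year = power * SECONDS_PER_YEAR  # ~5.5e24 Joules
```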
The insolation is not constant over the surface of the Earth — it is concentrated near the equator (first figure on the page) because of the curvature of the Earth. But, the situation is complicated by the fact that the Earth’s spin axis is tilted by 23.4° relative to a line perpendicular to the Earth’s orbital plane (see the second figure on the page), so that as Earth orbits around the Sun, the insolation is concentrated in the Northern Hemisphere (the Northern Hemisphere summer) and then the Southern Hemisphere (winter in the Northern Hemisphere). This tilt of the spin axis, also called the obliquity, is the main reason we have seasons.
The tilt of the spin axis also means that day length changes, and these changes are most dramatic at the poles, which experience 24 hours of daylight during their summers and no daylight during their winters. The varying day length, along with the angle of incidence of the Sun’s rays, combine to control the average daily insolation variation (see figure above). On a yearly average, the equatorial region receives the most insolation, so we expect it to be the warmest, and indeed it is.
Earlier, we mentioned the Solar Constant — a measure of the amount of solar energy reaching Earth. In reality, this value is not a constant because the Sun is a dynamic star with lots of interesting changes occurring. One of the best known of these changes is the solar cycle, related to sunspots. Sunspots are dark regions on the surface of the Sun related to intense magnetic activity, and measurements have shown that the greater the number of sunspots, the greater the energy output of the Sun. Early observations of these sunspots revealed a pronounced cyclical pattern to them, varying on an 11-year cycle, as shown below.
Here, in blue, we see the annual number of sunspots and in red we see the reconstructed solar intensity or Solar Constant. The reconstruction is made by studying the relationship between sunspot number and solar intensity in the last few decades, where we have good direct measurements of the solar intensity — this provides a relationship that is fairly simple and directly proportional. Higher sunspot numbers correspond to higher solar intensity. Both records are characterized by a strong 11-year cycle, often called the sunspot cycle.
The magnitude of variation in the Solar Constant, however, is quite small, and we shall see in our lab activity for this module that this amounts to a very small change in the temperature of the Earth.
When our planet absorbs and emits energy, the temperature changes, and the relationship between energy change and temperature change of a material is wrapped up in the concept of heat capacity, sometimes called specific heat. Simply put, the heat capacity expresses how much energy you need to change the temperature of a given mass. Let’s say we have a chunk of rock that weighs one kilogram, and the rock has a heat capacity of 2000 Joules per kilogram per K — this means that we would have to add 2000 Joules of energy to increase the temperature of the rock by 1 K. If our rock had a mass of 10 kg, we’d need 20,000 Joules to get the same temperature increase. In contrast, water has a heat capacity of 4184 Joules per kg per K, so you’d need about twice as much energy to change its temperature by the same amount as the rock.
The heat capacity of a material, along with its total mass and its temperature, tells us how much thermal energy is stored in the material. For instance, if we have a square tub full of water one meter deep and one meter on the sides, then we have one cubic meter of water. Since the density of water is 1000 kg/m3, this tub has a mass of 1000 kg. If the temperature of the water is 20 °C (293 K), then we multiply the mass (1000) times the heat capacity (4184) times the temperature (293) in K to find that our cubic meter of water holds about 1.22e9 (1.2 billion) Joules of energy. Consider for a moment two side-by-side cubic meters of material — one cube is water, the other air. Air has a heat capacity of about 700 Joules per kg per K and a density of just 1.2 kg/m3, so its initial energy would be 700 x 1.2 x 293 = 246,120 Joules — a tiny fraction of the thermal energy stored in the water. If the two cubes are at the same temperature, they will radiate the same amount of energy from their surfaces, according to the Stefan-Boltzmann law described above. Losing that same amount of energy lowers the temperature of the air cube far more than the water cube, so in the next interval of time the now-cooler air radiates less energy, while the still-warm water continues to radiate more. The result is that the temperature of the water cube is much more stable than the air — the water changes much more slowly; it holds onto its temperature longer. The figure above shows the results of a computer model that tracks the temperature of these two cubes.
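A model of this kind can be sketched in a few lines. This toy version (our own simplification, not the course's actual model) lets each cube radiate from a single one-square-meter face and tracks its temperature over an hour:

```python
# Toy thermal-inertia model: two 1 m cubes, water vs air, each radiating
# from one 1 m^2 face. Heat capacities and densities are from the text.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def cool_by_radiation(mass_kg, heat_capacity, t_kelvin, seconds, dt=60.0):
    """Temperature (K) after radiating from a 1 m^2 face for `seconds`."""
    elapsed = 0.0
    while elapsed < seconds:
        energy_lost = SIGMA * t_kelvin ** 4 * dt       # Joules lost this step
        t_kelvin -= energy_lost / (mass_kg * heat_capacity)
        elapsed += dt
    return t_kelvin

HOUR = 3600.0
water_temp = cool_by_radiation(1000.0, 4184.0, 293.0, HOUR)  # barely cools
air_temp = cool_by_radiation(1.2, 700.0, 293.0, HOUR)        # cools drastically
```

The water cube loses only a fraction of a degree in an hour, while the air cube plunges: the same energy loss produces very different temperature responses.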
One way to summarize this is to say that the higher the heat capacity, the greater the thermal inertia, which means that it is harder to get the temperature to change. This concept is an important one since Earth is composed of materials with very different heat capacities — water, air, and rock; they respond to heating and cooling quite differently.
The heat capacities for some common materials are given in the table below.
Substance | Heat Capacity (J kg⁻¹ K⁻¹) |
---|---|
Water | 4184 |
Ice | 2008 |
Average Rock | 2000 |
Wet Sand (20% water) | 1500 |
Snow | 878 |
Dry Sand | 840 |
Vegetated Land | 830 |
Air | 700 |
Earlier, we noticed that if you do the energy balance calculation to figure out the temperature of our planet, it suggests that Earth should be -19 °C, which is 34 °C colder than the observed average global temperature of 15 °C. Why is Earth warmer than it should be? The answer lies in the greenhouse effect — gases in our atmosphere (including CO2, CH4 (methane), and H2O (water vapor)) trap much of the heat emitted by the surface and then re-radiate it back down. This means that the energy leaving our planet from the top of the atmosphere is less than one would expect given the known temperature of our planet. As mentioned earlier, this effect can be represented in the simple energy balance equation as a term called the emissivity.
The fact of this greenhouse effect comes out of the very simple calculation we did above, but it can also be observed in great detail from satellite measurements of the infrared energy leaving Earth’s atmosphere.
As we discussed on the topic of blackbody radiation, the temperature of a body (a planet, for instance) tells us what the spectrum of energy it emits should look like — that is, the range of wavelengths and the intensity of radiation at those wavelengths. For the Earth, this spectrum, as seen from satellites looking down on the surface, is very different from what is expected. The figure below shows the difference between the expected and the observed spectra.
In fact, the same thing happens to the energy the Earth receives from the Sun — various gases in the atmosphere absorb that energy, so the amount we receive on the surface is less than what arrives at the top of the atmosphere.
How do gases absorb this energy? It is basically a matter of vibrations of gas molecules being in sync with some of the frequencies of energy associated with insolation or with the infrared energy given off by Earth. You can think of the bonds between atoms in an H2O molecule as springs that stretch, twist, and bend at specific frequencies; if energy hits those molecules at just the right frequency, the bonds absorb that energy and oscillate, stretch, and twist more strongly.
There are numerous ways to demonstrate this heat-trapping ability of some gases in the laboratory, but you can also think of the difference between the cold nighttime temperatures when the air is dry (little water vapor) and the warmer nighttime temperatures when the air is humid. The fact of the greenhouse effect is one of the most important things to understand about our climate system. This greenhouse effect, which is probably better described as warming produced by heat-trapping gases, is incredibly powerful: it returns more energy to the surface than we absorb directly from the Sun, and its strength is closely tied to the global carbon cycle, and thus to the oceans and all the biota on Earth.
Let’s try to put a lot of this together now and have a glance at the energy budget for Earth’s climate. The figure below attempts to illustrate where all the energy goes in the climate system. We start with 100 units of energy, which represents the total amount of energy Earth receives from the Sun in a year.
When the insolation strikes the atmosphere, 23 units are reflected back to space from clouds and aerosols, which are tiny particles suspended in the atmosphere. Another 19 units are absorbed by the atmosphere, as described in the figure above, thus adding thermal energy to the atmosphere. The remaining 58 units of energy reach the Earth’s surface, where 9 units are reflected back into space, and the remaining 49 units are absorbed by the surface, warming the planet. The Earth’s surface is mostly water, and by virtue of its temperature and heat capacity, it has a lot more thermal energy than the atmosphere (271.2 vs 16.5). Energy flows up from the surface to the atmosphere in a variety of ways — mainly by emission of infrared radiation and by heat transfer through evaporation and the subsequent condensation of water. When water evaporates, it “steals” energy from the surface; this energy is needed to make the phase change from liquid to vapor, and the same energy is then released when water vapor condenses to form liquid water droplets. As you can see from the diagram, the combined flow of energy from the surface is greater than the amount we get from the Sun! Of this energy given off by the surface, a little bit (7 units) escapes the atmosphere because no atmospheric gases absorb infrared energy at wavelengths between 10 and 15 microns; the rest is absorbed by the atmosphere, which then emits infrared energy from its top to outer space and from its bottom back to the surface. This atmospheric absorption of infrared energy and its return to the surface is called the greenhouse effect. Since the bottom of the atmosphere is much warmer than the top, much more energy is returned to the Earth’s surface than is emitted to outer space.
The remarkable thing to observe and remember here is that the surface receives almost twice as much energy from the greenhouse effect than it does directly from the Sun! But, if you look at the diagram a bit, you can see that the energy sent to the surface from the atmosphere is essentially recycled energy, whose origin is the Sun.
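The unit bookkeeping in this budget is worth checking for yourself; the sketch below simply tallies the numbers quoted in the text:

```python
# Bookkeeping for the 100-unit energy budget described above.
incoming = 100
reflected_by_clouds = 23       # reflected to space by clouds and aerosols
absorbed_by_atmosphere = 19    # absorbed on the way down
reaching_surface = incoming - reflected_by_clouds - absorbed_by_atmosphere  # 58
reflected_by_surface = 9
absorbed_by_surface = reaching_surface - reflected_by_surface               # 49

# Total reflection (23 + 9 = 32 units) is close to the planetary albedo
# of 0.31 used in the equilibrium calculation earlier.
total_reflected = reflected_by_clouds + reflected_by_surface
```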
The view of the climate system depicted in the adjacent figure is one of stability — energy flows in and out, in perfect balance, so the temperature of the Earth should stay the same. But if we can learn anything from studying Earth’s history, it is that change is the rule and stability the exception. When change occurs, it almost always brings feedback mechanisms into play — they can accentuate or dampen change, and they are incredibly important to our climate system. There are many good examples of feedback mechanisms, but here are a few to illustrate the idea.
Ice reflects sunlight better than almost any other material on Earth, and in reflecting sunlight, it lowers the amount of insolation absorbed by Earth, which makes it colder. If the Earth becomes colder, more ice may grow, covering more area and thus reflecting even more insolation, which in turn cools the Earth further. Thus cooling instigates ice expansion, which promotes additional cooling, and so on — this is clearly a cycle that feeds back on itself to encourage the initial change. Since this chain of events furthers the initial change that triggered the whole thing, it is called a positive feedback (but note that the change may not be good from our perspective). Positive feedback mechanisms tend to lead to runaway change — some small initial change is thus accentuated into a major change.
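A positive feedback of this kind can be captured with a toy amplification loop. The gain value below is purely illustrative (not a measured climate quantity), but it shows how the outcome depends on the feedback strength:

```python
# Toy positive feedback: each round, ice growth adds a further cooling
# equal to `gain` times the previous round's cooling. Illustrative only.

def run_feedback(initial_change, gain, steps=10):
    """Total temperature change after `steps` rounds of feedback."""
    total = initial_change
    added = initial_change
    for _ in range(steps):
        added *= gain    # more ice -> more reflection -> more cooling
        total += added
    return total

# With gain < 1 the feedback amplifies the nudge but converges (a -1 degree
# nudge roughly doubles); with gain >= 1 the change runs away.
converged = run_feedback(-1.0, 0.5)
runaway = run_feedback(-1.0, 1.0)
```

The gain-below-one case mirrors real amplifying feedbacks that enlarge a change without (usually) running away; the gain-of-one case sketches the runaway behavior mentioned above.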
Rocks exposed at the surface interact with water and the atmosphere and undergo a set of chemical and physical changes we call weathering. The chemical part of weathering often involves the consumption of carbonic acid (formed from water and carbon dioxide) in dissolving minerals in rocks. This process of weathering is thus a sink for atmospheric carbon dioxide, which is an important greenhouse gas. If you remove carbon dioxide from the atmosphere, you weaken the greenhouse effect and this leads to cooling of the Earth. Like many chemical reactions, this chemical weathering occurs more rapidly in hotter climates, which are associated with higher levels of carbon dioxide. So consider a scenario in which some warming occurs; this will encourage faster weathering, which will consume carbon dioxide, which will lead to cooling. In this case, the initial change triggered a set of processes that countered the initial change — this is called a negative feedback (even though it may have beneficial results) because it works in opposition to the change that triggered it.
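A toy calculation (my own illustration, not part of the course materials) makes the contrast between these two kinds of feedback concrete. Suppose each "round" of a feedback returns a constant fraction f of the most recent temperature change as a further change; a positive f amplifies the initial change, a negative f damps it, and if f ever reaches 1 or more, the change truly runs away.

```python
# Toy feedback loop: each round, a fraction f of the last temperature
# change is fed back as a further change.  For |f| < 1 this is a
# geometric series converging to initial_change / (1 - f).
def total_change(initial_change, f, rounds=200):
    total, last = 0.0, initial_change
    for _ in range(rounds):
        total += last
        last *= f   # the feedback returns a fraction f of the last change
    return total

# Positive feedback (f = +0.5): 1 degree of initial warming
# is amplified to about 2 degrees in total.
print(total_change(1.0, +0.5))

# Negative feedback (f = -0.5): 1 degree of initial warming
# is damped to about 0.67 degrees in total.
print(total_change(1.0, -0.5))
```

The ice-albedo feedback behaves like the first case and the weathering feedback like the second, though of course the real fractions are not constant.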
Another important negative feedback mechanism involves the formation of clouds. On the whole, clouds in today's climate have a slight net cooling effect — this is the balance of the increased albedo due to low clouds and the increased greenhouse effect caused by high cirrus clouds. As a general rule, as the atmosphere gets warmer, it can hold more water vapor, and with more water vapor, we expect more clouds, and the increased clouds will then tend to limit the warming that initiated the increased clouds — thus we have another negative feedback mechanism.
In Asian philosophy, yin and yang can be thought of as interacting, interconnected forces that are essential components of a dynamic system. In the Earth system, positive and negative feedbacks are a bit like yin and yang — they are essential components of the whole system that ultimately play an important role in maintaining a more or less stable state. Positive feedback mechanisms enhance or amplify some initial change, while negative feedback mechanisms stabilize a system and prevent it from getting into extreme states. In many respects, the history of Earth’s climate system can be seen as a bit of a battle between these two types of feedback, but in the end, the negative feedbacks win out and our climate is generally stable with a limited range of change (excepting, of course, a few extremes such as the Snowball Earth events back around 750 Myr ago).
Earth’s climate system is characterized by thresholds: levels that, once crossed, herald a new climate state. For example, we often talk about the 1.5°C or 2°C thresholds, beyond which the impacts of climate change become dangerous, including long heatwaves, devastating droughts, and more frequent extreme weather events. Thresholds become tipping points if, once crossed, there is no going back from the new climate state, at least temporarily. One of the best examples of a tipping point is the cessation of the Atlantic meridional overturning circulation (AMOC), which we will learn about in Module 6 and which is a key driver of heat transport around the globe. This circulation drives the formation of ocean deep waters, which in turn feeds the Gulf Stream, which warms northern latitudes, including Western Europe, making them more habitable. It also fuels monsoon rains in places like India. In the past, the AMOC has turned off, plunging Northern Europe into an ice age. The system is highly complex and difficult to predict, but there have been warning signs in recent decades that it is edging closer to that potentially devastating tipping point: a system typically becomes more variable as a tipping point approaches, and the AMOC’s variability is currently showing signs of increasing. Tipping points may involve positive feedbacks. For example, melting and disintegration of the West Antarctic Ice Sheet will lower planetary albedo, resulting in further melting, and at some stage the feedback will make the change irreversible, at least temporarily. Another example is the melting of permafrost in the Arctic region. Tipping points can lead to cascading changes if they impact one another; for example, significant melting of Antarctic ice can cause enough warming to exacerbate permafrost melting. More examples of tipping points are shown in the figure below.
Tipping points represent one of the most dire threats of climate change because of their irreversible nature. For this reason, they are a key consideration behind warming thresholds and emissions targets.
The diagram we have just been considering (repeated above) presents a good overview of how energy flows through the Earth’s climate system, but it does not give us a sense of how that energy is distributed across the surface of the globe, and there are some important things to be learned from looking at this spatial pattern. For many years now, satellites have been monitoring these energy flows using spectrometers that measure the intensity of energy at different wavelengths flowing to and from the Earth. So, let’s see what can be learned from a quick study of these satellite views. First, we consider the insolation at the top of the atmosphere averaged for the month of March.
Of course, not all of this insolation makes it to the surface; remember that just 49% of it ends up absorbed by the ground. If we then look at the insolation reaching the ground, we see the following:
Notice that the highest flux is about 190 W/m2, far less than the maximum of almost 440 W/m2 that reaches the top of the atmosphere. The difference is due to reflection from clouds, reflection from the surface, and absorption by atmospheric gases.
Now, let’s look at what comes back from the Earth, in the form of long wavelength energy, for the same time period.
At the simplest level, we see that the tropics emit much more energy than the poles. This makes sense: the tropics are warmer because they receive much more insolation (see figures above), and the Stefan-Boltzmann law tells us that the amount of energy emitted varies as the fourth power of temperature. Looking closely at the image above, we see some interesting variations near the tropics. Look at South America, Central Africa, and Indonesia, where the emitted energy is far less than we see elsewhere at these same latitudes. Why is this? Is it colder there? No, it is not colder there, which leads to another question: is the atmosphere above these regions absorbing more of the infrared energy emitted by the surface? Recall that one of the main heat-absorbing gases is water vapor, and where you have a lot of water vapor, you have a lot of clouds. So, let’s have a look at the typical average cloud cover for this time of year.
So, it is indeed the case that the amount of energy leaving the Earth varies according not only to the temperature but also to the concentration of heat-absorbing gases such as water.
Recall that we are focused on the energy budget here and whenever you do a budget, at the end, you look at the balance between what is coming in and what is going out. So, let’s do that now with the energy as measured by the satellites.
As can be seen in the figure above, the tropics receive more energy than they emit, while the poles emit more than they receive. This picture can also be seen in a somewhat simpler diagram in which we average the net energy flow at each latitude.
The atmosphere and oceans are constantly flowing, and this motion is critical to the climate system. What makes them flow? In general, the movement is due to pressure differences — things flow from regions of high pressure to low pressure and the resulting surface winds distribute heat at the Earth's surface.
The pressure changes are themselves due to density and height differences; higher density in the air or the oceans leads to higher pressures. The density differences are due to changes in composition and temperature, and this works slightly differently for air and water. In air, the important compositional variable is water vapor content: more water vapor means lower density air. When we say more water, we mean that, of a given number of molecules in a volume of air, a greater percentage of them are water molecules, and a water molecule, as shown below, is lighter than a nitrogen molecule, the most abundant molecule in our atmosphere.
Weight of H2O molecule = 18; weight of N2 molecule = 28
Higher temperature = lower density → rising air; lower temperature = higher density → sinking air
More water vapor = lower density → rising air; less water vapor = higher density → sinking air
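Avogadro's law is the key here: at a given temperature and pressure, equal volumes of gas hold equal numbers of molecules, so the density of air tracks its mean molecular weight, and swapping heavy molecules for light ones makes the air lighter. A quick sketch (the 78/22 nitrogen/oxygen split for dry air is a simplification that ignores argon and trace gases):

```python
# Why moist air is lighter than dry air: density tracks the mean
# molecular weight of the mix of molecules in a fixed volume.
M_N2, M_O2, M_H2O = 28.0, 32.0, 18.0

def mean_molecular_weight(water_fraction):
    """Dry air approximated as 78% N2 / 22% O2; water vapor
    molecules displace dry-air molecules one for one."""
    m_dry = 0.78 * M_N2 + 0.22 * M_O2
    return (1 - water_fraction) * m_dry + water_fraction * M_H2O

print(mean_molecular_weight(0.00))  # dry air: about 28.9
print(mean_molecular_weight(0.02))  # 2% water vapor: about 28.7, lighter, so it rises
```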
As indicated above, density differences can cause either rising or sinking of air masses. Because the atmosphere is compressed under its own weight, its density decreases with height, giving a kind of equilibrium profile of density above the surface, as shown by the green curve below:
If we lower the density of air at the surface from A to B, then the air rises from B to C. Then, if we increase the density of air at point C, moving it to D, it will sink back down to point A near the surface.
We start with the movement of the atmosphere, which we will try to make as simple as possible by first concentrating on the flow as seen in a vertical slice from pole to pole. The story begins at the equator, where air is warmed and lots of evaporation adds water to the air, giving it a low density:
This air rises until it reaches the tropopause, which acts a bit like a lid on the lower atmosphere. It then diverges, with some of the air flowing north and some flowing south. As it rises and moves away from the equator, the air gets colder; water vapor condenses and rains out, and the air grows drier. The cooling and drying both make the air denser, and by the time it reaches about 30°N and 30°S latitude, it begins to sink down to the surface.
The sinking air is dense and dry, creating zones of high pressure in each hemisphere that are associated with very little cloud cover and rainfall; these are the desert latitudes. The sinking air hits the ground and then diverges: some flows south and some flows north. The parts of this divergent flow that return toward the equator complete a loop, or convection cell, called a Hadley Cell after George Hadley, who first described this circulation in the 1730s. Now, let’s turn our attention to the air that flows away from the equator. Moving along the surface, it warms and picks up water vapor, so its density decreases, and it eventually rises when it gets to somewhere between 45° and 60° latitude in each hemisphere.
Once again, the rising air runs into the tropopause (which is lower at these higher latitudes) and diverges, with some of it returning toward the equator, thus completing another convection cell called the Ferrel Cell. The air that flows poleward sinks down at the poles, creating yet another convection cell known as the Polar Hadley Cell. These convection cells create bands of low and high pressure that roughly follow lines of latitude and exert a big influence on the climate at different latitudes. The air flowing within these convection cells does not simply move north and south as depicted above; the Coriolis effect alters the flow directions, giving us a surface pattern that is dominated by winds flowing east and west.
Note that the boundary between the Polar Hadley Cell and the Ferrel Cell (often called the Polar Front, and associated with the mid-latitude jet stream) is highly variable, with big loops in it. These loops, or waves, change over time to a much greater extent than the boundaries between the other convection cells.
The Coriolis Effect arises because our planet is spinning, which means that objects near the equator are moving at much faster velocities than objects at higher latitudes. If you were standing on the equator, you would be traveling at about 1600 km/hr; if you were standing at the North Pole, you would be traveling at 0 km/hr. This means that a parcel of air moving across the surface moves into regions where the whole planet is traveling either slower or faster. The physics of this phenomenon are well-understood, and without getting into the mathematics behind it, we can summarize it with 4 simple statements:
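The velocity contrast that drives the Coriolis effect is easy to compute for yourself. Here is a short sketch using round numbers (a 6371 km radius and one rotation per 24 hours are assumed; the "about 1600 km/hr" above is the same figure, rounded down):

```python
import math

# Ground speed due to Earth's rotation at a given latitude: the
# circumference of the circle of latitude divided by one day.
EARTH_RADIUS_KM = 6371.0
HOURS_PER_ROTATION = 24.0

def rotation_speed_kmh(latitude_deg):
    circumference = 2 * math.pi * EARTH_RADIUS_KM * math.cos(math.radians(latitude_deg))
    return circumference / HOURS_PER_ROTATION

print(round(rotation_speed_kmh(0)))    # equator: roughly 1670 km/hr
print(round(rotation_speed_kmh(45)))   # mid-latitudes: roughly 1180 km/hr
print(round(rotation_speed_kmh(90)))   # pole: 0 km/hr
```

An air parcel moving from the equator toward the pole thus carries an eastward speed excess relative to the ground beneath it, which is why its path appears to curve.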
Let’s think for a minute about what this general circulation of the atmosphere does. Among other things, it mixes the atmosphere quite thoroughly, and this means that the concentrations of things like greenhouse gases get homogenized. It also means that heat gets transferred. Polar air finds its way toward lower latitudes, where it cools the surface and in so doing warms itself, and warmer air finds its way to higher latitudes, where it gives up its heat to the surroundings and thus cools.
But this general circulation does more — it drives the circulation of the surface waters in the ocean.
The oceans swirl and twirl under the influence of the winds, Coriolis, salinity differences, the edges of the continents, and the shape of the deep ocean floor. We will discuss ocean circulation in detail in Module 6, but since ocean currents are critical agents of heat transport, we must include them here as well. In general, the surface currents of the oceans are driven by winds, Coriolis, and the edges of continents, and the deep currents that mix the oceans are driven by density changes related to temperature and salinity as well as the shape of the deep ocean floor.
The pattern of circulation is shown in the figure below, which represents the average paths of flow; on a shorter term, the flow is dominated by eddies that spin around.
In this map, the different colors correspond to the warm currents (red), cold currents (blue), and currents that move mostly along lines of latitude and thus do not transport waters across a temperature gradient (black). These latter currents may involve warm or cold water, but they do not move that water to warmer or colder places. As mentioned earlier, these arrows depict average flow paths, but on a shorter timescale, the water is involved in eddies that move along the directions indicated by these arrows. These ubiquitous eddies are important since they mix up the surface of the oceans, just as swirling a spoon in a coffee cup mixes the coffee. There are several ways of forming eddies, including intermittent winds combining with the Coriolis effect, opposing currents interacting with each other, and currents interacting with coastlines. As this pattern of currents indicates, surface ocean circulation moves a lot of warm water to colder portions of the Earth; it also moves cold water back down to warmer regions — the net effect is to exchange heat and bring the tropics and the poles a little closer to each other in terms of temperature. Or, in other words, this (along with the winds) moves surplus energy from the tropics to the regions of energy deficit near the poles.
It is important to realize that these currents, by themselves, would eventually homogenize the temperature on the surface, were it not for the huge difference in solar energy between the tropics and the poles. In addition, the strength of these air and ocean currents is sensitive to the temperature difference between the poles and the equator — the greater the temperature difference, the stronger the currents.
The surface currents described above are generally confined to the upper hundred meters or so of the oceans, and considering that the average depth of the oceans is about 4000 meters, the surface currents represent a very small part of the ocean system. The rest of the oceans are also in motion, moving much more slowly under the influence of density differences caused by temperature and salinity changes. Cold, salty water is dense, while warm, fresh water is light, and the resulting density differences drive a system of flows sometimes referred to as the thermohaline circulation. In today’s world, there are two principal places where deep waters form — the North Atlantic and Antarctica, as shown below:
In the North Atlantic, warm, salty water from the Gulf Stream comes into contact with cold Arctic air, and as the water cools, it becomes very dense and sinks to the bottom of the ocean; this water mass is called the North Atlantic Deep Water (NADW). When NADW forms, a tremendous amount of heat is transferred from the water to the air; this heat is equivalent to about 30% of the thermal energy received by the whole polar region, so it can influence the Arctic climate in a major way. In the Antarctic, as sea ice forms at the edge of the ice sheet, pure water is removed from seawater, increasing the salinity of the remaining water; the resulting density increase makes this the densest water in the oceans, and it sinks to the bottom; this water mass is called the Antarctic Bottom Water (ABW). Of these two deep water flows, the NADW is much greater. It flows in a complex path, hugging the bottom of the ocean as it moves through the Atlantic and into the Indian and Pacific Oceans, by which point it has warmed and mixed with the surrounding water enough to rise back to the surface, where it begins its return path to the North Atlantic, completing the loop in something like a thousand years. This flow is sometimes called the Global Conveyor Belt (we will talk a lot more about this in Module 6), and it represents an important means of mixing the global oceans.
These deep currents are very important to the global climate system in a couple of ways. One of these, described above, is the way that NADW formation influences the Arctic climate; this, in turn, can influence the formation or melting of ice in the polar region, which can trigger the ice-albedo feedback mechanism described earlier. Another way these deep currents influence the global climate is by transporting CO2 into the deep waters of the oceans. The CO2 is dissolved into seawater at the surface, so when deep waters form, they carry that CO2 with them, removing it from the atmosphere. This effectively increases the volume of ocean water that can hold CO2, which increases the total mass of carbon the oceans can hold. Indeed, these deep currents are already transporting anthropogenic CO2 and other gases such as CFCs into the deep ocean (we will talk a lot more about this in Modules 5 and 7).
In this activity, we’ll explore some relatively simple aspects of Earth’s climate system, through the use of several STELLA models. STELLA models are simple computer models that are perfect for learning about the dynamics of systems — how systems change over time. The question of how Earth’s climate system changes over time is of huge importance to all of us, and we’ll make progress towards understanding the dynamics of this system through experimentation with these models. In a sense, you could say that we are playing with these models, and watching how they react to changes; these observations will form the basis of a growing understanding of system dynamics that will then help us understand the dynamics of Earth’s real climate system.
A STELLA model is a computer program containing numbers, equations, and rules that together form a description of how we think a system works; it is a kind of simplified mathematical representation of a part of the real world. Systems, in the world of STELLA, are composed of a few basic parts that can be seen in the diagram below:
A Reservoir is a model component that stores some quantity — thermal energy in this case.
A Flow adds to or subtracts from a Reservoir; it can be thought of as a pipe with a valve attached to it that controls how much material is added or removed in a given period of time. The cloud symbols at the ends of the flows signify that the material or quantity has a limitless source or sink.
A Connector is an arrow that establishes a link between different model components — it shows how different parts of the model influence each other. The labeled connector, for instance, tells us that the Energy Lost Flow is dependent on the Temperature of the planet.
A Converter is something that does a conversion or adds information to some other part of the model. In this case, the Temperature converter takes the thermal energy stored in the Reservoir and converts it into a temperature.
To construct a STELLA model, you first draw the model components and then link them together. Equations and starting conditions are then added (these are hidden from view in the model) and then the timing is set — telling the computer how long to run the model and how frequently to do the calculations needed to figure out the flow and accumulation of quantities the model is keeping track of. When the system is fully constructed, you can essentially press the ‘on’ button, sit back, and watch what happens.
Our first model is slightly more complicated than the diagram shown above because there are quite a few other parameters that determine how much energy is received and emitted and how the temperature of the Earth relates to the amount of thermal energy stored. The complete model is shown below, with three different sectors of the model highlighted in color:
The Energy In sector (yellow above - albedo, solar constant, surf area, and insolation) controls the amount of insolation absorbed by the planet. The Solar Constant converter is a constant, as the name suggests — 1370 Watts/m2. This is then multiplied by the cross-sectional area of the Earth — this is the area that faces the Sun — giving a result in Watts (which you should recall is a measure of energy flow and is equal to Joules per second). This is then multiplied by (1 – albedo) to give the total amount of energy absorbed by our planet. In the form of an equation, this is:
S is the Solar Constant (1370 W/m2), Ax is the cross-sectional area, and a is the albedo (0.3 for Earth as a whole).
The Energy Out sector (blue above - surf area, LW int, LW slope) of the model controls the amount of energy emitted by the Earth in the form of infrared radiation. This is simply described by the Stefan-Boltzmann Law as being the surface area times the emissivity times the Stefan-Boltzmann constant times the temperature raised to the fourth power:
A is the whole surface area of the Earth, e is the emissivity, s is the Stefan-Boltzmann constant, and T is the temperature of the Earth.
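Setting Energy In equal to Energy Out lets us predict the equilibrium temperature the model should settle to before we ever run it. In a minimal Python sketch (my own, not part of the STELLA materials), the cross-sectional area cancels against the surface area, leaving a factor of 4:

```python
# Equilibrium temperature from S * Ax * (1 - a) = A * e * sigma * T^4,
# with Ax = pi R^2 and A = 4 pi R^2, so the areas reduce to a factor of 4.
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)

def equilibrium_temp_c(solar_constant=1370.0, albedo=0.3, emissivity=1.0):
    t_kelvin = (solar_constant * (1 - albedo) / (4 * emissivity * SIGMA)) ** 0.25
    return t_kelvin - 273.15

# Albedo 0.3 and emissivity 1.0 (no greenhouse): about -18 C, the value
# the model settles to in the first experiment below.
print(round(equilibrium_temp_c(), 1))

# Emissivity 0.6147 (the value used later in the lab): about 15 C.
print(round(equilibrium_temp_c(emissivity=0.6147), 1))
```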
The Temperature sector (brown above - water density, ocean depth, heat capacity, temp) of the model establishes the temperature of the Earth’s surface based on the amount of thermal energy stored in the Earth’s surface. In order to figure out the temperature of something given the amount of thermal energy contained in that object, we have to divide that thermal energy by the product of the mass of the object times the heat capacity of the object. Here is how it looks in the form of an equation (with units added):
Here, E is the thermal energy stored in Earth’s surface [Joules], A is the area of the planet [m2], d is the depth of the oceans involved in short-term climate change [m], ρ is the density of sea water [kg/m3] and Cp is the heat capacity of water [Joules/kg·K]. We assume water to be the main material absorbing, storing, and giving off energy in the climate system since most of Earth’s surface is covered by the oceans. The terms in the denominator of the above fraction will all remain constant during the model’s run through time — they are set at the beginning of the model and can be altered from one run to the next. This means that the only reason the temperature changes is because the energy stored changes.
The model has a few other parts to it, including the initial temperature of the Earth, which determines how much thermal energy is stored in the Earth at the beginning of the model run. There are also some converters that divide the energy received and the energy emitted by the surface area of the Earth to give a measure of the intensity of energy flow, or flux, in terms of Watts/m2, which is a common form for expressing energy flows in climate science.
One unit of time in this model is equal to a year, but the program will actually calculate the energy flows and the temperature every 0.01 years.
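The whole calculation loop can be sketched in a few lines of Python. This is my own approximation of the STELLA model, not the model itself: it works per square meter of surface, uses round values for the density (1000 kg/m3) and heat capacity (4186 J/kg·K) of water, and steps the energy store forward every 0.01 years just as described above.

```python
# Minimal energy-balance model: Euler time steps of 0.01 year.
SIGMA = 5.67e-8            # Stefan-Boltzmann constant, W/(m^2 K^4)
SECONDS_PER_YEAR = 3.156e7

def run_model(initial_temp_c=0.0, albedo=0.3, emissivity=1.0,
              ocean_depth_m=100.0, solar_constant=1370.0,
              years=30, dt_years=0.01):
    rho, cp = 1000.0, 4186.0                  # water density and heat capacity
    heat_capacity = rho * cp * ocean_depth_m  # J per m^2 per kelvin
    temp_k = initial_temp_c + 273.15
    energy_in = solar_constant * (1 - albedo) / 4.0   # W/m^2, sphere average
    for _ in range(round(years / dt_years)):
        energy_out = emissivity * SIGMA * temp_k ** 4  # W/m^2
        # net flux times elapsed time, divided by heat capacity,
        # gives the temperature change for this step
        temp_k += (energy_in - energy_out) * dt_years * SECONDS_PER_YEAR / heat_capacity
    return temp_k - 273.15

# The first experiment's setup: start at 0 C, run 30 years;
# the temperature settles near -18 C.
print(round(run_model(), 1))
```

Notice that the ending temperature is the equilibrium value set by the energy fluxes; the initial temperature and the ocean depth only control how quickly the model gets there.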
Now that you have seen how the model is constructed, let’s explore it by doing some experiments. Here is the link to the model [141].
What will happen to the temperature of the Earth if we run the model for 30 years with the following initial conditions:
Initial Temp = 0°C
Albedo = 0.3 (this will not change over time)
Emissivity = 1.0 (this will not change over time)
Ocean Depth = 100 m (this will not change over time)
Solar Constant = 1370 W/m2
These are the values you see when you first launch the model.
Once you are done answering the questions below, enter your answers into the Module 3 Lab Submission (Practice) to check your answers. If you didn’t do as well as you'd hoped, review the course materials, including the instructional videos, or post questions to the Yammer group to ask for clarification of a particular topic or concept. After that, open the Module 3 Lab Submission (Graded) and complete the graded version of the lab. The graded lab mostly includes questions similar to the practice lab, but has some additional questions.
Download this lab as a Word document: Lab 3: Climate Modeling [142] (Please download required files below.)
Use this Model for Questions 1 - 4. [143] Please Note: The model in the videos below may look slightly different than the model linked here. Both models, however, function the same.
How will changing the initial temperature affect the model? We saw that when we started with an initial temperature (remember that this is the global average temperature) of 0°C, the model ended up with a temperature of about -18°C. What will happen if we start with a different initial temperature? Change the initial temperature to 1, then run the model and take note of the ending temperature by placing your cursor over the curve at the right-hand side (where the time is 30 years) and clicking; a little box will show the position of your cursor. Round this temperature to the nearest whole number. Select your answer from the following:
A. 10°C
B. -8°C
C. -18°C
D. -33°C
Click on the Restore all Devices button when you are done, before going on to the next question.
What will happen to our climate model if we change the albedo? Recall that a low albedo represents a dark-colored planet that absorbs lots of solar energy, while a higher albedo (it can only go up to 1.0) represents a light-colored planet that reflects lots of solar energy. Change the albedo to 0.5, then run the model and find the ending temperature, and select your answer from the following:
A. about -38(plus or minus 1)
B. about 2 (plus or minus 1)
C. about -1 (plus or minus 1)
D. about -16 (plus or minus 1)
Click on the Restore all Devices button when you are done, before going on to the next question.
Next, we will see what happens when we change the emissivity. Recall that if the emissivity is 1.0, the planet has no greenhouse effect and as the emissivity gets smaller, it represents a stronger greenhouse effect — so, how will this change our climate model? Change the emissivity to 0.3, then run the model and find the ending temperature, and select your answer from the following:
A. about -18 (plus or minus 1)
B. about 47 (plus or minus 1)
C. about 16 (plus or minus 1)
D. about 71 (plus or minus 1)
Click on the Restore all Devices button when you are done, before going on to the next question.
The solar constant is not really constant for any length of time. For instance, the Sun was only about 70% as bright early in Earth’s history, and it undergoes much smaller, more rapid fluctuations in association with the 11-year sunspot cycle. Let’s see how the temperature of the planet reacts to changes in the solar constant. First, we need to run a “control” version of our model, as is shown in the video above. Set the model up with the following parameters:
Initial Temp = 15°C
Albedo = 0.3
Emissivity = 0.6147 (enter the value manually in the box)
Ocean Depth = 100 m
Solar Constant: alter the graph as shown in the video. Note that the lines in the Solar Constant graph do not line up exactly with the numbers. To get the exact value (1372), click on the Solar Constant plot, then on Graph, and enter the value at X=15, Y=1372.
Record the peak temperature (should be 15.04 deg C) and the time lag (should be 1.7 years).
What we are going to look at now is how the ocean depth affects the way the model responds to this spike in the solar constant. In our control, the ocean depth is 100 m — this means that only the upper 100 m of the oceans are involved in exchanging heat with the atmosphere on a timescale of a few decades. If the oceans were mixing faster, this depth would be greater, and if they were mixing more slowly, the depth would be less. Change the ocean depth to 50 m. Then run the model and note the peak value of the temperature and estimate the lag time, for comparison with the control version. Select your answer from the following:
A. Peak temp > control; lag time > control
B. Peak temp < control; lag time > control
C. Peak temp > control; lag time < control
D. Peak temp < control; lag time < control
Use this Model for Question 5. [144] Please Note: The model in the videos below may look slightly different than the model linked here. Both models, however, function the same.
Now, we’re ready to try something more challenging and more realistic. In the real world, the surface temperature has a big impact on the albedo — when it gets very cold, snow and ice will form and increase the albedo. So, there is a feedback in the system — a temperature change will cause an albedo change, which will cause a temperature change, and so forth. To explore this feedback, we need to work with an altered version of the model [145], where we have defined the relationship between albedo and temperature as follows:
This graph implies that there is a kind of threshold temperature of about -10 to -15°C, at which point the whole planet becomes frozen. The suggestion is that even with a very cold global temperature of 0 °C, the equatorial region might be relatively ice-free and would thus have a low albedo, but as the temperature gets colder, even the tropics become covered by snow and ice. Once that happens, the planetary albedo changes only slightly. Likewise, at higher temperatures, the albedo decreases only slightly since there is so little snow and ice to remove.
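The model's actual albedo-temperature graph is not reproduced here, but its shape can be sketched with a smooth ramp. In the sketch below, the endpoint albedos (0.6 when frozen, 0.28 when warm) and the transition width are illustrative assumptions of mine, chosen only so that the curve passes near the steady-state albedo of 0.3 at 15°C and flattens out at both extremes, as described above.

```python
import math

# Hypothetical albedo(T) curve: a smooth ramp between a high "frozen"
# albedo and a low "ice-free" albedo.  All numbers are assumed for
# illustration; they are not the model's actual values.
ALBEDO_ICE, ALBEDO_WARM = 0.6, 0.28

def albedo(temp_c, midpoint_c=0.0, width_c=10.0):
    frozen_fraction = (1 - math.tanh((temp_c - midpoint_c) / width_c)) / 2
    return ALBEDO_WARM + (ALBEDO_ICE - ALBEDO_WARM) * frozen_fraction

print(round(albedo(15), 2))    # near 0.3 at today's 15 C
print(round(albedo(-20), 2))   # near 0.6 once the planet is frozen over
print(round(albedo(30), 2))    # decreases only slightly at higher temperatures
```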
It is important to understand what this model includes: a link between planetary temperature and planetary albedo. As the temperature changes, so the albedo changes; as the albedo changes, so the insolation changes; and as the insolation changes, so the temperature changes. This is a feedback mechanism. Feedback mechanisms are very important components of many systems, and our climate system is full of them.
By definition, feedback mechanisms are triggered by a change in a system — if it is in steady state, the feedbacks may not do much. In the above graph, you may notice that at a temperature of 15°C (our steady state temperature), the albedo is 0.3, which is the albedo of our steady state model. So, if we run the model with an initial temperature of 15 °C, and an unchanging solar constant of 1370, our system will be in a steady state and we will not see the consequences of this feedback. But, if we impose a change on the system, things will happen.
The change we will impose involves the greenhouse effect. The model includes something called the CO2 Multiplier. When this has a value of 1, it gives us a CO2 concentration of 380 ppm, which is the default value that gives us a temperature of 15°C. If we change it to 2, we then have 760 ppm and a stronger greenhouse, which leads to warming. If we change it to 0.5, we then have 190 ppm and a weaker greenhouse, thus cooling.
You will be given a value for the CO2 Multiplier; enter that into the model and run it with the Albedo Switch in the off position (see the video) and note the ending temperature. Then turn the Albedo Switch on, which activates the feedback mechanism, and run the model again, noting the ending temperature. The difference between these two temperatures is what you need for your answer. For example, if you set the CO2 Multiplier to 3 and run the model with the Albedo Switch turned off, you see an ending temperature of 18.17°C, and then with the switch turned on, the ending temperature is 24.86°C, so the temperature difference due to the albedo feedback is +6.69°C — this is the answer you would select.
What is the temperature difference due to the albedo feedback? Choose the answer that most closely matches your result. Be sure to study page 3 of the graph pad to get your results.
A. about -5°C
B. about +11°C
C. about +18°C
D. about -20°C
Causes of Climate Change
Use this Model for Questions 6-7. Please Note: The model in the videos below may look slightly different than the model linked here. Both models, however, function the same.
Things that can cause the climate to change are sometimes called climate forcings. It is generally agreed that on relatively short time scales like the last 1000 years, there are 4 main forcings — solar variability, volcanic eruptions (whose erupted particles and gases block sunlight), aerosols (tiny particles suspended in the air) from pollution, and greenhouse gases (CO2 is the main one). Solar variability and volcanic eruptions are obviously natural climate forcings, while the aerosol and greenhouse gas forcings considered here are anthropogenic, meaning they are related to human activities. The history of these forcings is shown in the figure below.
Volcanoes, by spewing ash and sulfate gases into the atmosphere, block sunlight and thus have a cooling effect. This history is based on human records of eruptions in recent times and on ash deposits preserved in ice cores (which we can date because they have annual layers — we count backward from the present) and sediment cores for older times. Note that although the volcanoes have a strong cooling effect, the history consists of very brief events. The solar variability record comes from actual measurements in recent times and, further back in time, from the abundance of an isotope of beryllium whose production in the atmosphere is a function of solar intensity — this isotope falls to the ground and is preserved in ice cores. The greenhouse gas forcing record is based on actual measurements in recent times and ice core records further in the past (the ice contains tiny bubbles that trap samples of the atmosphere from the time the snow fell). The aerosol record is based entirely on historical observations and is zero in times before we began to burn wood and coal on a large scale.
In this experiment, we will add the history of these forcings over the last 1000 years and see how our climate system responds, comparing the model temperature with the best estimates for what the temperature actually was over that time period. Solar variability, volcanic eruptions, and aerosols all change the Ein or Insolation part of the model, while the greenhouse gas forcing changes the Eout part of the model. We can turn the forcings on and off by flicking some switches, and thus get a clear sense of what each of them does and which of them is the most important at various points in time.
We can compare the model temperature history with the reconstructed (also referred to in the model as “observed”) temperature history for this time period, which comes from a combination of thermometer measurements in recent times and temperature proxy data for the earlier part of the history (these are data from tree rings, corals, stalactites, and ice cores, all of which provide an indirect measure of temperature). This observed temperature record, shown in graph #1 on the model, is often referred to as the “hockey stick” because it resembles (to some) a hockey stick with the upward-pointing blade on the right side of the graph.
First, open the model with the forcings built in, and study the Model Diagram to get a sense of how the forcings are applied to the model. If you run the model with all of the switches in the off position, you will see our familiar steady state model temperature of 15°C over the whole length of time. The model time goes from the year 1000 to 1998 because the forcings are from a paper published in 2000.
Graph #1 plots the model temperature and the observed temperature in °C, graph #2 plots the 4 forcings in terms of W/m2, graph #5 plots the cumulative temperature difference between the model and the observed temperature (it takes the absolute value of the temperature difference at each time step and then adds them up — the lower this number at the end of time, the closer the match between the model and the observed temperatures), and graph #6 shows the same thing, but it begins keeping track of these differences in 1850, so it focuses on the more recent part of the history. Graph #1 gives you a visual comparison of the model and the observed temperatures, while graphs #5 and 6 give you a more quantitative sense of how the model compares with reality.
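The bookkeeping behind graphs #5 and #6 is simple enough to sketch directly. A hedged illustration with made-up temperature lists: at each time step we take the absolute value of the model-observed difference and add it to a running total; starting the tally at a later index reproduces graph #6's behavior of tracking only the period from 1850 onward.

```python
def cumulative_mismatch(model_temps, observed_temps, start_index=0):
    """Running total of |model - observed|, as in graph #5; start_index
    lets the tally begin partway through, as graph #6 does from 1850."""
    total = 0.0
    history = []
    for m, o in zip(model_temps[start_index:], observed_temps[start_index:]):
        total += abs(m - o)
        history.append(total)
    return history

# Made-up temperatures: the lower the final value, the better the match.
model = [15.0, 15.1, 15.3, 15.6]
observed = [15.0, 15.2, 15.2, 15.5]
print(round(cumulative_mismatch(model, observed)[-1], 2))  # prints 0.3
```

Because every step adds a non-negative amount, the curve can only rise; a flat stretch means the model is tracking the observations closely over that interval.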
Before running the model, set the ocean depth to 50 m. Run the model 4 times with each of the forcing switches turned on separately (i.e., only one forcing switch turned on for each model run) and evaluate which of the forcings does the best job of matching the shape of the observed temperature curve from 1800 to 1998. Which one provides the best match?
A. GHG
B. Aerosols
C. Volcanoes
D. Solar
Before running the model, set the ocean depth to 150 m. Run the model 3 times — once with only the natural forcing switches turned on, once with only the anthropogenic forcings turned on, and once with all of them turned on. Which combination does the best job of matching the shape of the observed temperature curve from 1800 to 1998?
A. natural forcings
B. anthropogenic forcings
C. all forcings
D. natural and anthropogenic forcings are about the same.
You should have read the contents of this module carefully, completed and submitted any labs, the Yellowdig Entry and Reply, and taken the Module Quiz. If you have not done so already, please do so before moving on to the next module. Incomplete assignments will negatively impact your final grade.
On the evening weather report on October 15th, 1987, weatherman Michael Fish uttered the now infamous words "Earlier on today a woman rang the BBC to say she'd heard there was a hurricane on the way. Well, if you're watching: don't worry, there isn't." As it happens, an extratropical storm with hurricane-force winds swept across the British Isles that night, with wind speeds up to 100 mph, leading to 18 deaths, $1.6 billion in damage, and the loss of 15 million trees. And Michael Fish is now a weather legend for all the wrong reasons. Up to the latest part of the 20th century, British weather forecasters, in general, had a miserable reputation. If forecasters back then said rain, it was just as likely to be sunny, and no one took much notice. Since 1987, however, weather forecasting has become much more of a science than an art, with highly sophisticated models operated by powerful computers making weather predictions. You can rely on weather forecasts to be accurate for the most part. For example, extreme weather forecasts for events like hurricanes, tornadoes, and snow storms are generally accurate as a result of these improved models. Some of these same models are now used to make projections about climate change. In this module, you will learn how models work and what predictions they are giving for the future.
We have already learned about very simple climate models that represent the whole Earth in one box and slightly more advanced models that represent the Earth in a few latitudinal bands. As you might imagine, there is a whole spectrum of models, and at the far end in terms of complexity are GCMs — which can mean either General Circulation Model or Global Climate Model. There are GCMs that model just the atmosphere (AGCMs), just the oceans (OGCMs), and those that include both (AOGCMs). These models divide the Earth up into a big 3-D grid and then treat each little cube or cell much as we treat reservoirs in STELLA models. The basic structure of a GCM can be seen in the figure below:
As you can see, the models include land, air, and ocean domains, and each of these domains is treated somewhat separately since different processes act within the various domains. The more cells in a model, the closer it can approximate the real Earth, but too many cells would require more computing power than is available. The history of these models is closely connected to the history of advances in computing power, and the current generation of high-end GCMs are among the most computationally-intensive programs in existence. Models are in a continuing state of development and evolution, so in the future, they will be more complex and realistic; with continued advances in computational power and reduction in the cost of runs, models are set to take on more ambitious tasks such as making very fine projections about an ever-expanding number of environmental variables. Combine them with robots and look out!
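The reservoir-and-flow bookkeeping inside each cell can be illustrated with a toy one-dimensional sketch. This is in no way a GCM — real models solve the equations of fluid motion on a 3-D grid — but it shows the basic pattern of many small cells exchanging a quantity with their neighbors at each time step; the cell temperatures and diffusivity below are invented for illustration.

```python
def step(grid, diffusivity=0.1):
    """One time step on a ring of cells: each cell is a heat reservoir,
    and the flow between neighbors is proportional to their temperature
    difference -- the same reservoir-and-flow logic as a STELLA model,
    just repeated over many cells."""
    n = len(grid)
    new = grid[:]
    for i in range(n):
        left, right = grid[(i - 1) % n], grid[(i + 1) % n]
        new[i] += diffusivity * (left + right - 2 * grid[i])
    return new

cells = [30.0, 20.0, 10.0, 0.0, 10.0, 20.0]  # made-up temperatures, C
for _ in range(200):
    cells = step(cells)
print(round(sum(cells) / len(cells), 1))      # mean is conserved: 15.0
```

Note that the exchanges conserve the total heat while smoothing out the differences between cells, a basic sanity check that also applies to the transport schemes in real models.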
What’s so important about these models that people would devote their careers to building and refining programs that take days to run on the fastest computers? Their power and utility is that they can show us how climate changes on a regional scale, which is of utmost importance in planning for our future. We will probably be most effective in dealing with climate change at the scale of regions like states and countries, so having a model that shows us what those regional changes are likely to be is a very important tool.
A key aspect of using models is understanding the uncertainty of their predictions. They are simulations, after all. As you will see, most of the results are cast in terms of ranges; for instance, the temperature predictions for 2100 under the different emission scenarios are given with significant bands of uncertainty. The reading for this module is the Intergovernmental Panel on Climate Change's Summary for Policymakers. When you read this, you will see how scientists convey the uncertainty of model predictions.
On completing this module, students are expected to be able to:
Below is an overview of your assignments for this module. The list is intended to prepare you for the module and help you to plan your time.
Action | Assignment | Location
---|---|---
To Do | |
Ever wonder how weather forecasts are so accurate? How predictions are made over days and weeks? How hurricanes and blizzards are forecast? General circulation models (GCMs) are instrumental in weather forecasting. They are highly detailed grid-based simulations of weather that use atmospheric physics to predict events over hours, days, and even further into the future. These models are also commonly used to predict climate change over years, decades, and centuries. GCMs have become more and more accurate as the physics of the atmosphere has become better understood. And as computers have grown more powerful, the models have become more accessible to the general public: they once required a mainframe computer, but you can now run some of them on a laptop! In this module, we explore how GCMs work.
In a highly simplified sense, the operation of GCMs can be thought of in a few basic steps.
This figure comes from a run of the NCAR model, called CCM (Community Climate Model), and represents the atmospheric pressure at sea level averaged over 10 years. Also shown is the pattern of winds that results from the combination of the forces due to the pressure differences (air flows from high to low pressure, driven by a Pressure Gradient Force and the Coriolis Force, which is related to the rotation of the Earth). The length of the arrows is proportional to the strength of the winds. Note that the model produces belts of pressure that are very similar to the observed pressure belts — low pressure near the equator, high pressure at 30°N and 30°S, low pressure at 50-60°N and 50-60°S, and high pressure again near the poles, patterns we learned about in Module 3.
This is obviously a very important question — if we are to rely on these models to guide our decisions about the future, we need to have some confidence that the models are good. The most important approach is to see if the model can simulate the known climate history. We set the model up to represent the state of the climate at some point in the past — say 1900 — and then we see how well the model can reproduce what actually happened.
As you can see in the figure below, the models are collectively quite good:
In this figure, the black line is the instrumental global average temperature (as an anomaly, which is a departure from the mean value from 1901 to 1950), the yellow lines represent the output from 58 model runs by 14 different models, and the red is the average of those 58 runs. The vertical gray lines are times of major volcanic eruptions, which are always followed by a few years of cooler temperatures.
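The anomaly calculation used in that figure is easy to sketch; the temperatures below are made-up illustrative values, with 1901-1950 as the reference period, as in the figure.

```python
def anomalies(temps_by_year, ref_start, ref_end):
    """Convert absolute temperatures to departures from the mean of a
    reference period (the figure uses 1901-1950)."""
    ref = [t for y, t in temps_by_year.items() if ref_start <= y <= ref_end]
    baseline = sum(ref) / len(ref)
    return {year: t - baseline for year, t in temps_by_year.items()}

record = {1901: 13.8, 1950: 14.0, 2000: 14.5}  # made-up global means, C
anom = anomalies(record, 1901, 1950)
print(round(anom[2000], 2))  # 14.5 minus the 13.9 baseline: prints 0.6
```

Plotting anomalies rather than absolute temperatures lets runs from many models be compared on a common scale even when their baseline climates differ slightly.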
Now, let’s look at how closely the models can simulate the spatial pattern of temperatures over the Earth. To begin with, we’ll look at January temperatures for the time between 2003 and 2005 — here is what we can reconstruct from observations (which are better in some places than others — we know the Sea Surface Temperature (SST) much better than the land temperature, since SST is very precisely measured by satellites).
Now, we look at the same time period from a model simulation.
You can see that in general, they are quite similar to each other, but we can gain a bit more insight into the relationship between the observations and the model by subtracting the model from the observations — the result is this:
Here, you can see that the model and the observations are generally quite close, within a couple of degrees of zero, where zero would be a perfect match. Areas that are yellow to orange are regions where the actual temperature is greater than the modeled temperature; blue areas are regions where the actual temperature is lower than the model. It would be a challenging task to figure out the cause of these differences, but at a very fundamental level, it is related to the fact that things like clouds are very important to the climate system, and the processes that actually form clouds occur on such a small scale that the models cannot resolve them. Cloud formation is one example of what the modelers call a sub-grid process, and modelers have to devise clever ways of getting around this. This is an area where refinements continue to occur, but for the time being, we see that the models do an impressive job, but not a perfect job. As you can see in the above results, the models tend to slightly underestimate temperatures on land, which suggests that model projections of future land temperatures may likewise run a bit low.
If we average over longer time periods and also average many model results together, we get what climate modelers refer to as an ensemble mean, and these ensemble means do a remarkably good job of matching the observations, as is shown below.
In this figure, the observed annual mean temperatures for the time period 1961-1990 are represented by the black contour lines, labeled in °C (-56°C in Antarctica and about 24°C in equatorial Africa). The colors represent the model temperatures (from 14 models, for the same 1961-1990 time period) minus the observations; positive values mean the models estimate temperatures that are too high. On the whole, the models slightly underestimate the temperatures, and they have particular problems at very high latitudes and in areas that are topographically complex.
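Computing an ensemble-mean bias map follows directly from that description: average the model fields point by point, then subtract the observations, so positive values flag places where the models run warm. The three tiny "fields" below are invented for illustration.

```python
def ensemble_bias(model_fields, observed_field):
    """Ensemble mean of several model fields minus the observations,
    point by point; positive values mean the models are too warm."""
    n_models = len(model_fields)
    bias = []
    for i, obs in enumerate(observed_field):
        ens_mean = sum(field[i] for field in model_fields) / n_models
        bias.append(ens_mean - obs)
    return bias

# Three made-up model fields, each with two grid points (say, equatorial
# Africa and Antarctica), compared against a made-up observed field.
models = [[14.2, -55.0], [14.8, -57.0], [14.6, -58.0]]
observed = [14.5, -56.0]
print([round(b, 2) for b in ensemble_bias(models, observed)])
```

Averaging first tends to cancel the individual models' offsetting errors, which is why the ensemble mean matches observations better than most single models do.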
As mentioned above, one of the important aspects of GCMs is that they calculate the precipitation, and this provides another means of evaluating how good the models are. The figure below shows a comparison of the observed annual precipitation and the average of the 14 models used by the IPCC study.
As can be seen, the models, on average, do quite well at simulating the global pattern of precipitation.
AOGCMs calculate the circulation and temperature of the world’s oceans, and we can evaluate them in this regard by comparing the model-generated temperatures averaged over 1957 to 1990 with the observations from that time period.
In this figure, we see the latitude-averaged observed ocean temperature from 1957 to 1990 in the contours, and the average model ocean temperatures minus the observed temperatures in colors. For most of the oceans, the models are ± 1°C from the observations.
In sum, it should be fairly clear that although these models are incredibly complicated, they do a fairly good job of reproducing the temporal and spatial characteristics of our climate, especially when we look at longer time averages. In other words, if we compare the model results for a given day with the actual observations of that day, the agreement is not very good; the agreement gets better on a monthly-averaged basis, and it gets even better on an annually-averaged basis. Today, models are in a constant state of improvement, with certain elements, such as the impact of clouds, needing a lot more understanding.
Before we look at the results of GCMs that run into the future, we have to understand a few things about the experimental setups that go into these models. In order to do these experiments, the modelers have to apply forcings just as we did with our simple climate model in Module 3, and the primary variable is CO2 (carbon emissions) added to the atmosphere through human activities such as burning fossil fuels, farming, making cement, etc.
The IPCC (Intergovernmental Panel on Climate Change) has developed a whole set of scenarios (about 40) that represent possible carbon emissions histories for the next 100 years. Carbon emissions are the key variable driving the climate model runs for each scenario. The IPCC's newer scenarios are known as Representative Concentration Pathways (RCPs), each based on an emissions trajectory, and the IPCC has focused on several individual RCPs that span a range of emissions and climate outcomes. For simplicity, in this module we do not use the RCP terminology, which is not very intuitive. Instead, we use the older scenario groups known as SRES families (also developed by the IPCC), defined by the severity of the emissions cuts (or lack thereof!) and the amount of cooperation between countries. Each family, or scenario, is based on a set of assumptions about population growth, economic growth, and the choices we might make to minimize carbon emissions. You can read more about the whole set of emissions scenarios at Wikipedia: Special Report on Emissions Scenarios. But for our purposes, we'll focus on 3 families representing very different emissions scenarios.
The first of these scenarios is called SRES A2, and it is commonly known as business-as-usual — in other words, it leads to a continuation of increased annual carbon emissions that follows the recent history. In effect, this scenario represents a somewhat divided world, one in which we just can't reach agreements on what to do about limiting emissions of CO2, so each country does what seems to be in its own best interests. This world is characterized by independently operating, self-reliant nations. This world also includes a continuously increasing population — it does not level off during this time period. Above all, in this world, decisions are based primarily on perceived economic interests, and the assumption is made that these interests do not include the development of alternative energy sources.
The second scenario, called SRES A1B, is a bit more optimistic. This scenario envisions an integrated world characterized by rapid economic growth, a population that reaches 9 billion by 2050 and then declines gradually, and the rapid development of alternative energy sources that facilitate increased economic growth while limiting and eventually reducing carbon emissions. This scenario also assumes that there will be rapid development and sharing of technologies that help us reduce our energy consumption. One of the keys to this scenario is that countries are integrated — they act together and find ways to improve the conditions for everyone on Earth. At this point in time, A1B is an optimistic but realistic scenario; B1 (described next) would take a revolution in the way the world economies work. And A2, or "business as usual" — well, we will show you this is not the road we want to travel!
The third scenario, called SRES B1, represents an even more integrated, more ecologically friendly world, but one in which there is still steady and strong economic growth. As in scenario SRES A1B, the population in this scenario peaks at 9 billion in 2050 and then declines. One way to think of this scenario is that it represents a rapid, strong, and global commitment to the reduction of carbon emissions — it represents the best we could possibly do, and yet it does not rely on miracle technologies. The only real miracle it requires is that we all quickly figure out how to think and act globally and not focus solely on our own national interests. Most projections ignore scenario B2, as the combination of regional and environmental strategies is highly unlikely.
In graphical form, here are the three emissions scenarios.
Each scenario shows emissions of carbon to the atmosphere (mainly from fossil fuel burning — FFB) in units of Gigatons of carbon per year (Gt C/yr; a Gt is a billion tons!), so this is an annual rate. In terms of a STELLA model (which we will return to in the next Module), this represents a flow into the atmosphere. Roughly half of the carbon emitted will remain in the atmosphere and lead to a stronger greenhouse effect, which will, in turn, increase global temperature and change the climate in a variety of ways.
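The arithmetic connecting these emission rates to atmospheric CO2 can be sketched with two widely used rules of thumb: roughly half of emitted carbon stays airborne (as noted above), and about 2.13 Gt of carbon corresponds to 1 ppm of atmospheric CO2. The starting concentration and emission rate below are illustrative round numbers, not values from any particular scenario.

```python
GTC_PER_PPM = 2.13         # ~2.13 Gt of carbon per 1 ppm of atmospheric CO2
AIRBORNE_FRACTION = 0.5    # roughly half of emissions stay in the atmosphere

def co2_after(years, start_ppm=420.0, emissions_gtc_per_yr=10.0):
    """Project atmospheric CO2 from a constant annual emission rate
    (illustrative round numbers, not a scenario from the figure)."""
    airborne_gtc = emissions_gtc_per_yr * AIRBORNE_FRACTION * years
    return start_ppm + airborne_gtc / GTC_PER_PPM

print(round(co2_after(50), 1))  # 50 years at 10 Gt C/yr: prints 537.4
```

The scenarios differ precisely in how that emission rate changes through time, so in a fuller sketch the constant rate would be replaced by a year-by-year emissions curve.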
Next, we'll have a look at the main driver for emissions reduction, the Paris Climate Agreement, and then frame the model scenarios in terms of their implications for climate change and climate policy.
The Paris Accord is a really big deal. This global climate agreement, brokered by the United Nations and signed in December 2015 by some 195 countries, went into effect in October 2016. A key goal of the accord is for all member countries to reduce greenhouse gas emissions to keep global temperature rise below 2°C above pre-industrial levels (measured from 1880). This is a threshold above which most scientists agree that the impacts of climate change will be catastrophic, including flooding of large coastal cities by sea level rise, brutal heat waves, and droughts that could cause widespread starvation in developing countries. The agreement also lays the groundwork for countries to strive to keep the temperature increase below 1.5°C. That is considered the best-case scenario: given that we are currently about 1.1°C above 1880 levels, it would require very drastic emissions cuts very quickly. Above 1.5°C, low-lying island nations in the Pacific and Indian Oceans would likely end up underwater. The agreement also acknowledges that 2°C is a more likely warming target. As we've seen before, this is a significant amount of warming, but much more favorable than the 3 or 4°C that would produce catastrophic climate effects.
There have been numerous climate agreements in the past, most notably the 1997 Kyoto Protocol, which laid out stringent emissions targets for the countries that signed on. One of the reasons the Paris Accord was adopted by so many nations (only Syria and Nicaragua did not sign initially, but both now have) is that the impacts of climate change are becoming increasingly urgent. Although 127 countries signed on to Kyoto, the US did not.
The second reason the accord was so widely adopted is that the emission targets are voluntary, set by the individual countries based on what they believe is feasible. For example, the US’s goal was to reduce emissions by 26 percent by 2025. Once a country sets its target, it is required to abide by it and present supporting monitoring data. A country can change targets every five years. The downside of this flexibility is that many experts believe that with current targets, 1.5°C is impossible, 2°C is highly unlikely, and 3°C is more realistic. So, countries will have to reduce emissions radically and rapidly to stave off the highly adverse impacts of climate change.
The Paris Accord has two other key provisions. The first is that richer developed countries have made a financial commitment to help poorer developing nations meet their targets, although there were no firm amounts in the agreement. President Obama pledged $2.5 billion to this fund while he was in office, but only $500 million was paid. The second is that the agreement recognizes and addresses deforestation as a key element of emission reduction and allows countries to use forest management strategies as part of their emissions goals. In fact, at the climate summit in 2021, 100 countries, including Brazil, where deforestation has been particularly devastating, agreed to stop deforestation entirely by 2030, which would be a major step forward.
For the US, one of the key components of the emissions reduction strategies was the Clean Power Plan, introduced in 2014 under President Obama. The CPP set out to reduce emissions from electrical power generation by 32% based on the reduction of emissions from coal-fired power plants and the conversion to renewable sources of energy including wind, solar, and geothermal.
So, the strength of the Paris Accord is that it is voluntary, highly transparent, and collaborative. The downside is that it is voluntary! The general fear is that the agreement may not go far enough, fast enough.
However, that said, this is by far the most widely adopted climate agreement, and its global scope is a massive accomplishment. So, it was a major disappointment on June 1st, 2017, when President Donald Trump signaled that the US would withdraw from the Paris Accord in 2020, the earliest time the US could pull out under the agreement guidelines. The fear is that if the US, the second largest producer of carbon, does not abide by its emissions goals, other countries might not either. Trump's reasons were largely economic: that the conversion to renewable energy would be too expensive and hinder the bottom line of businesses, especially the fossil fuel industries. However, as we will see in this class, the conversion to renewables has begun and has the potential to be a very large and highly profitable business. Moreover, cities, states, and businesses themselves, especially those in the northwest and on the west coast, are already committed to reducing emissions, so at least part of the US will continue to collaborate with other countries to address the critical issue of climate change.
Trump's overall strategy involves defunding and repealing the Clean Power Plan, which happened in 2019. It was replaced by the Affordable Clean Energy Rule, which was invalidated by the courts in 2021.
Fortunately for climate, at least, President Biden reentered the Paris Agreement on 19 February 2021 and has set even more ambitious US emissions targets than those originally agreed upon. The Inflation Reduction Act passed in 2022 includes $369 billion to help individuals, communities, and industries switch to renewable energy. This will help the US meet its Paris targets, and possibly more importantly, the legislation shows important leadership on the international stage.
We will refer to the Paris Agreement in the remainder of the course. In this module, we discuss how models simulate the climate of the future. In closing, remember the 2°C number; it's going to come up over and over again.
In this section, we explore GCM predictions for future changes in temperature, precipitation, and surface water using three of the emission scenarios, A2, A1B, and B1. The models are driven primarily by estimates of CO2 input over the coming decades in each scenario.
In this section, we explore the predictions from GCMs regarding temperature under the different IPCC emissions scenarios described in the previous section. As part of the IPCC assessment process, all of the major GCM modeling groups around the world ran their models with the same emissions scenarios (A2, A1B, and B1) in order to provide the best estimate of what the future climate might look like under those scenarios. Each model is different, and so their results are also different. The figure below gives a sense of how much variability and similarity there is in these models. Remember that climate scientists believe it is key that we keep the warming below 2°C; above that level, the consequences, including drought, heat waves, and melting of the ice sheets, could be dire.
Each of the thin colored lines represents the output from a different GCM — here we see the average global temperature through time, starting in 1900 and going until 2099, using the SRES A2 scenario, which is the one we sometimes call "business as usual." There is obviously a big spread in the results, but they all have more or less the same general form and a similar range (the difference between the minimum and maximum temperatures). The thicker gray line is the mean from all these models, and we can see that by the end of the century, the mean rises by about 3.2°C above the present temperature — this is roughly three times the warming we have experienced in the last century (about 1.1°C).
The differences in these curves reflect, among other things, different starting conditions, and if we force them to all have the same temperature today, the similarities are more apparent:
This view gives a sense of how much the models differ in their relative temperature changes over the next century. You can see that by the end of the century, the spread of temperatures is a bit less than 2°C, but most of the models fall within half a degree of the mean temperature — the thick gray line. As we have discussed, this mean warming of 3.2°C would be disastrous for the planet.
The lesson here is that the similarities in the models are far more important than their differences, and that they forecast a significant temperature rise by the end of the century as we essentially continue with our emissions of carbon into the atmosphere.
Next, let’s have a look at what the models say about the different emissions scenarios. Just to refresh your memories, here are those emissions scenarios again:
Recall that in these emissions scenarios, we talk about the annual rate of carbon emissions to the atmosphere and that carbon takes the form of CO2 in the atmosphere.
Here are the corresponding global temperature histories from the collection of models for these different scenarios:
As expected, the A2 scenario results in the highest temperature rise, followed by the A1B and B1 scenarios. Remember that the A1B scenario represents a pretty optimistic view of how we will react to the challenge of climate change, but even in that case, the temperature rises by about 2.3°C over the next century, which is above the Paris 2.0°C target. And even with the dramatic reduction in emissions envisioned in the B1 scenario, the temperature still rises by about 1.4°C — greater than what we’ve experienced in the last century. The lesson here is that we need to be prepared for continued climate change even if we take steps to limit carbon emissions into the atmosphere. And note that only B1 keeps us below the catastrophic 2.0°C threshold discussed earlier.
Next, we turn to the really interesting aspect of the GCM results, which is the spatial patterns of climate change. Why is this so interesting and important? The reason is that what really matters to us is how the climate changes in key areas — areas that will affect sea level through melting of glacial ice, areas where people are concentrated, and areas where we produce food to feed ourselves. Remember that the climate of the Earth is highly variable, and if we talk about a global temperature rise of 3.2°C, we have to remember that the temperature will rise more than that in some places (such as the polar regions) and less than that in the tropics.
We begin with a look at the climate of the future as predicted by the NCAR (National Center for Atmospheric Research in Boulder, Colorado) model — this model seems to often fall close to the mean of all the other models.
Here, we see 4 views of the surface temperature anomaly relative to the 1960-1990 mean. The upper 2 panels represent the mean temperature anomaly for the 20-year period from 2046 to 2065, in the month of July on the left and January on the right. In the lower 2 panels, we see similar views for the 20-year period from 2080 to 2099. The color scale below applies to all of the panels and shows the anomaly in kelvin (K); since these are temperature differences, the values are the same in °C.
Let’s start with the mid-century forecast. It calls for moderate warming in the range of 2 °C relative to the 1960-1990 mean. For both months, the warming is slightly higher on land than in the oceans, and there are a few spots of cooling: one at the southern edge of the Sahara, one in the Southern Ocean around Antarctica, and one in the North Atlantic. The striking feature of this model result is the big change in the high latitudes of the Northern Hemisphere in January, where a large region will experience much warmer winter temperatures — up to 10 °C warmer. The changes are even more dramatic for the end of the century, with the northern winters warming by up to 20 °C above the 1960-1990 mean! This clearly spells the end of polar ice, which is already at its lowest observed extent. Notice also that much of Canada and Siberia warm by up to 10 °C; this has important implications for the thawing of permafrost, which will increase the flow of carbon into the atmosphere via a positive feedback (more on this in the next module).
The pattern of warm winters is important for a number of reasons. For one thing, there will be less growth of ice in glaciers during the season when they normally accumulate mass; thus they will shrink faster, and sea level will rise at a higher rate. Another less obvious result of warmer winters is increased damage from insect pests whose populations are normally knocked back by cold winters; when winters are not cold enough, these pests expand their range and cause greater damage. This is already happening in the western US with the pine bark beetle, whose population has exploded in recent years, leading to the decimation of large forested areas. These dead pine trees then become fuel for forest fires whose scale exceeds fires in the historical record.
On the other hand, warmer winters lead to lower energy demands for heating, but this is offset by greater energy demands for cooling during the summer months (and the season requiring cooling will also lengthen).
For comparison with the A1B forecast, we now look at the end-of-century results for the other 2 scenarios: A2, our business-as-usual case, and B1, our optimistic case.
First, we have the A2 scenario, which leads to the highest warming:
For July, this scenario leads to warming of the continents of between 5 and 10 °C — very little of the US would escape warming in excess of about 6 °C, and the same is true for much of Europe. Winter shows an even more dramatic warming, with vast regions warming by more than 20 °C. A2 would result in a major increase in the number of days over 90 °F in places such as Atlanta and Austin; roughly half of the days of the year would exceed that level of discomfort. And polar ice would melt much faster than in A1B or B1.
Now, for the other extreme, the B1 scenario at the end of the century:
As expected, the warming is far less dramatic, but still shows an impressive warming at the high latitudes of the Northern Hemisphere in the winter months. But, for most of the land areas, where people are concentrated and where we grow a lot of our food, the climate changes generally do not exceed a warming of 2°C.
These comparisons demonstrate the importance of reducing emissions to scenario B1 levels.
Next, we turn to precipitation. We will be looking at anomaly maps showing the difference between the model’s precipitation rate (in mm/month or mm/day) and the average precipitation rate for 1960-1990. It will help us to begin with a glimpse of actual typical precipitation rates for the two months of interest here — January and July. Below, we see this pair of maps:
Here, red is used to show high rainfall, and blue is used to show low rainfall. The precipitation rates here are shown in terms of millimeters of rain per day (the high value here of 15 mm/day is about 0.6 inches/day). In the eastern US, for instance, the July rainfall is about 4-5 mm/day.
Looking into the future, we will again utilize the results from the NCAR model, looking at 20-yr. means from the climate model results to get a smoothed out version of the results. We focus here on the SRES A1B scenario, looking at the months of July and January.
The units here are a bit odd — kilograms of water per square meter per second — but it is still a rate, and if we do a conversion, we find that 0.0001 of these units equals about 8.6 millimeters per day (a bit less than one centimeter per day), or roughly 25 centimeters per month. Have a look at the map for July 2040 to 2069. In the eastern US, the precipitation anomaly is about 2e-5 kg/m2s. At first, it is difficult to get a sense of whether this is important. This anomaly translates to about 1.7 mm/day, or roughly 50 mm/month. From the map of typical precipitation rates above, we see that the typical rate for July in this region is on the order of 4-5 mm/day, so an increase of 1.7 mm/day is roughly a 30-40% increase — fairly significant.
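To see where these numbers come from, here is a quick sketch of the unit conversion. The 2e-5 anomaly and the 4-5 mm/day typical rate are values read off the maps as described above, not exact model output.

```python
# Convert a precipitation-rate anomaly from model units (kg of water
# per square meter per second) to mm/day.  The key fact: 1 kg of water
# spread over 1 m^2 makes a layer exactly 1 mm thick.

SECONDS_PER_DAY = 86_400

def kg_m2_s_to_mm_day(rate):
    """kg/m^2/s -> mm of water depth per day."""
    return rate * SECONDS_PER_DAY

print(kg_m2_s_to_mm_day(1e-4))                # map-scale value: 8.64 mm/day

anomaly_mm_day = kg_m2_s_to_mm_day(2e-5)      # about 1.7 mm/day
typical_mm_day = 4.5                          # eyeballed typical July rate
print(100 * anomaly_mm_day / typical_mm_day)  # percent increase, roughly 38%
```

The exact percentage depends on the typical rate you read off the map, but any value in the 4-5 mm/day range puts the increase in the 30-40% ballpark.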
Interestingly, there is not much change in the anomaly for this region (the eastern US) as we move to the end-of-the-century map in the lower left panel above, and in both cases, the western US is slightly drier. If we look at the January maps, we see a bigger change between the two time periods, with conditions getting drier by the 2080-2099 period. For January, it is important to note that precipitation decreases in the Western US; this could mean problems for the water supply out west, where the winter snows, as they melt in the summer, represent a major part of the water budget.
What about the other scenarios? Below, we see the precipitation anomaly maps for the A2 scenario — the one that leads to a hotter climate — for the 2080-2099 time period.
Compared to the A1B scenario for the same time period, we see a generally drier picture for the US, though the differences are actually quite small. Note that in this scenario, as in others, the tropics get wetter.
For scenario SRES B1, the one where the temperature at the end of the century is only slightly higher than the present, the precipitation changes are generally quite small, as can be seen in the map below:
Here, the very light colors indicate slight increases and decreases in the precipitation rate for July — the same picture holds for January as well.
In summary, the precipitation results show very complicated patterns of precipitation change, and for much of the globe, the changes are quite minimal. The model predicts that there will be wetter areas and drier areas, and what really matters is how these changes correlate with the regions where people live and grow their food. Under the A2 scenario (business-as-usual), we should expect a generally drier western and south-central US and a generally wetter northeastern US. We will revisit this issue in Module 8.
As was mentioned in the previous section on precipitation predictions from GCMs, there are some important implications for water supplies. In this section, we will take a look at how the model results might impact surface water in the future.
Surface water is of great importance since it is the primary source of water for agriculture. It is estimated that 69% of worldwide water use is for irrigation, and of this, about 62% comes from surface water, which includes streams, lakes, and reservoirs.
When rain falls on the surface, most of it is absorbed by the soil, and then from there, it slowly migrates to streams, and from there into lakes and reservoirs, but it ultimately leaves a region, through stream flow and evaporation, to return to the oceans or the atmosphere — this is just a part of the large global water cycle. The total amount of water flowing in the streams of a region provides a useful measure of how much water is available for agriculture. We'll discuss this in depth in Module 9.
It takes around 3,000 liters of water to produce enough food to satisfy one person's daily dietary needs. This is a considerable amount when compared to that required for drinking, which is just 2-5 liters. Producing food for the more than 7 billion people who inhabit the planet today requires water that would fill a canal ten meters deep, 100 meters wide, and 7.1 million kilometers long — enough to circle the globe about 180 times!
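The canal figure is easy to check with back-of-the-envelope arithmetic. Assuming exactly 7 billion people, 3,000 liters per person per day, and a 365-day year, the result lands a little above the quoted 7.1 million km and 180 laps; the original figures evidently used slightly different rounding.

```python
# Rough check of the irrigation-canal comparison from the text.

liters_per_year = 3_000 * 7.0e9 * 365     # L/person/day * people * days
m3_per_year = liters_per_year / 1_000     # 1 m^3 = 1,000 L

cross_section_m2 = 10 * 100               # canal 10 m deep x 100 m wide
length_km = m3_per_year / cross_section_m2 / 1_000

EARTH_CIRCUMFERENCE_KM = 40_075
times_around = length_km / EARTH_CIRCUMFERENCE_KM
print(f"{length_km / 1e6:.1f} million km, about {times_around:.0f} laps")
```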
As we have seen, a GCM will predict the amount of rainfall over the surface of the Earth, and if we combine that with a model of the topography of the land areas (which is included in the GCMs), we can figure out how much water will flow as surface water through different regions. This has been done by taking the average precipitation from 20 GCMs operating under the SRES A1B scenario and then calculating how that surface water flow compares to the long-term average from 1900 to 1970. The resulting data provide us with a very good idea of what to expect in the future if we follow the A1B scenario.
The results of the surface water predictions can be seen here, but we will focus on 3 snapshots from this history in a series of maps. We begin with a view of the predictions for the year 2020.
This image is a 30-year average, centered on the year 2020. The blue areas will see an increase in streamflow, and the red areas will see a decrease. The map includes contour lines that separate the values according to the labeled tick marks on the map scale. As we might expect, the changes are relatively small at this point, but the Mediterranean region has noticeably less streamflow.
As we move forward in time, to 2050, the changes become more dramatic.
And as we continue into the future, the picture in 2080 looks like this:
By the year 2080, we see some fairly stark differences in streamflow. A large swath around the Mediterranean, including much of Europe, North Africa, and the Middle East, will have significant reductions in streamflow, which will add stress to an agricultural system that is already operating close to its limit. Consider also that the world by this time will likely have close to 10 billion people. Much of the Southwestern US (including California) and Central America will also experience a reduction in streamflow. This is clearly bad news for a place like California, which already goes to great lengths to bring surface water to its cities and major agricultural production areas via a system of canals. And for Central America, continuing drought could lead to even more mass migration toward the US southern border. We will discuss this in much more detail in Module 9.
On the other hand, consider where most of the Earth's population is — China and India. Both of these regions, according to the model results, should experience an increase in surface water availability, which is good news.
Another big trend is that the high latitudes of the Northern Hemisphere experience a very large increase in surface runoff. This could be important if population patterns shift northward in a warming world.
We now consider how these changes add up over the whole globe — is surface runoff increasing or decreasing as we go into the future? The 3 maps shown above have been averaged along lines of latitude to give a simpler sense of the change, and these latitudinal averages are then weighted according to the surface area each latitudinal band represents and summed to give a global total.
As can be seen, the tropics and the high latitude regions tend to get wetter through time, while the mid-latitudes tend to become drier, and on a global scale, there is slightly more surface water runoff as we move into the future — though a 3.3% change is not too large. Nevertheless, remember the map pattern of the change — this is the more important aspect of the model data.
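The latitudinal averaging and area weighting described above can be sketched in a few lines of Python. The tiny grid here is invented purely for illustration (real model output has many more bands); the key idea is that cos(latitude) is proportional to the surface area of each latitude band on a sphere.

```python
import math

def global_mean(grid, latitudes):
    """Area-weighted global mean of a latitude-by-longitude anomaly grid."""
    # Average along each line of latitude first.
    zonal = [sum(row) / len(row) for row in grid]
    # Weight each band by cos(latitude), proportional to its area.
    weights = [math.cos(math.radians(lat)) for lat in latitudes]
    return sum(z * w for z, w in zip(zonal, weights)) / sum(weights)

latitudes = [60.0, 0.0, -60.0]          # band-center latitudes, degrees
grid = [[2.0, 2.0, 2.0, 2.0],           # wetter high northern latitudes
        [1.0, 1.0, 1.0, 1.0],           # wetter tropics
        [-1.0, -1.0, -1.0, -1.0]]       # drier southern mid-latitudes
print(global_mean(grid, latitudes))     # -> 0.75
```

Note that the polar bands count for much less than the tropical band, which is why large high-latitude changes contribute only modestly to the global total.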
So, in summary, as with precipitation, the future seems to hold a mixture of more and less surface water runoff, and this will have some important implications for where we will produce our food and which places may be better suited for human habitation. We will discuss the implications of the projections for regional drought in places such as the south-central US and Australia in more detail in Module 9.
In this module, you should have grasped the following concepts:
You should have read the contents of this module carefully, completed and submitted any labs, the Yellowdig Entry and Reply, and taken the Module Quiz. If you have not done so already, please do so before moving on to the next module. Incomplete assignments will negatively impact your final grade.
In summer 2019, the Brazilian Amazon rainforest was on fire. By far the largest rainforest on Earth, the Amazon is often called the lungs of the planet for its role in taking up carbon dioxide and releasing oxygen. The fires are a double whammy: they emit large quantities of CO2, warming the planet now, while the removal of forest reduces the long-term capacity of this vital "scrubber" of CO2, which will lead to additional warming in the future.
Only ten years ago, there was a global effort to preserve the rainforest, home to a tenth of the world's species and a vital part of the climate system due to its role in sequestering CO2. This changed with the election of President Jair Bolsonaro in 2018; his government turned a blind eye to logging in the forest combined with burning to make new agricultural land, with much of the clearing focused in indigenous territory.
There were 39,000 wildfires in the Amazon from January through August 2019, a 77% increase over the same period in 2018. Smoke filled the air in cities and is clearly visible on satellite images. The full impact won't be felt for a while, but given the scale and significance of the Amazon in the carbon cycle, there is sure to be a serious cost in terms of future atmospheric CO2 and warming. The policies may now change with the election of President Lula da Silva in 2022, who has vowed to protect the Amazon.
Carbon is unquestionably one of the most important elements on Earth. It is the principal building block for the organic compounds that make up life. Carbon's electron structure gives it a valence of 4, which means that it can readily form four bonds, including bonds with itself, leading to a great diversity in the chemical compounds that can be formed around carbon; hence the diversity and complexity of life. Carbon occurs in many other forms and places on Earth: it is a major constituent of limestones, occurring as calcium carbonate; it is dissolved in ocean water and fresh water; and it is present in the atmosphere as carbon dioxide, the second most abundant greenhouse gas and the trigger for the bulk of current global climate change.
The flow of carbon throughout the biosphere, atmosphere, hydrosphere, and geosphere is one of the most complex, interesting, and important of the global cycles. More than any other global cycle, the carbon cycle challenges us to draw together information from biology, chemistry, oceanography, and geology in order to understand how it works and what causes it to change. The major reservoirs for carbon and the processes that move carbon from reservoir to reservoir are shown in the figure below. You do not need to understand this figure yet, but just appreciate that there are many reservoirs and a lot of exchanges. The carbon cycle is anything but simple! We will discuss these processes in more detail, and then, we will construct and experiment with various renditions of the carbon cycle, but first, we will explore some of the history of carbon cycle studies.
The global carbon cycle is currently the topic of great interest because of its importance in the global climate system, and also because human activities are altering the carbon cycle to a significant degree. The potential effects of human activities on the carbon cycle and the implications for climate change were first noticed and studied by the Nobel Prize-winning Swedish chemist, Svante Arrhenius, in 1896. He realized that CO2 in the atmosphere was an important greenhouse gas and that it was a by-product of burning fossil fuels (coal, gas, oil). He even calculated that a doubling of CO2 in the atmosphere would lead to a temperature rise of 4-5°C -- amazingly close to the current estimates obtained with global, 3-D climate models that run on supercomputers. This early recognition of human perturbations to the carbon cycle and the climatic implications did not raise many eyebrows at the time, but humans' "experiment" of injecting massive amounts of CO2 into the atmosphere was just beginning then. We will be referring to this "experiment" throughout the module.
On completing this module, students are expected to be able to:
After completing this module, students should be able to answer the following questions:
Below is an overview of your assignments for this module. The list is intended to prepare you for the module and help you to plan your time.
Action | Assignment
---|---
To Do |
In the late 1950s, Roger Revelle, an American oceanographer based at the Scripps Institution of Oceanography in La Jolla, California, began to ring the alarm bells over the amount of CO2 being emitted into the atmosphere. Revelle was very concerned about the greenhouse effect from these emissions and was cautious because the carbon cycle was not then well understood. So, he decided that it would be wise to begin monitoring atmospheric concentrations of CO2. Revelle and a colleague, Charles Keeling, began monitoring atmospheric CO2 at an observatory on Mauna Loa, on the big island of Hawaii. Mauna Loa was chosen because its elevation and distance from industrial centers make it as close to a global signal as any location on Earth. The record from Mauna Loa, one of the classic plots in all of science, shown in the figure below, is a dramatic sign of global change that captured the attention of the whole world because it shows that this "experiment" we are conducting is having a significant effect on the global carbon cycle. The climatological consequences of this change are potentially of great importance to the future of the global population. The CO2 concentration recently crossed the 400 ppm mark for the first time in millions of years! In 2022, the yearly average was 417 ppm (check that number with the curve below!).
As the Mauna Loa record and others like it from around the world accumulated, a diverse group of scientists began to appreciate Revelle's concern that we really did not know too much about the global carbon cycle that ultimately regulates how much of our CO2 emissions stay in the atmosphere.
The importance of present-day changes in the carbon cycle, and the potential implications for climate change, became much more apparent when scientists began to get results from studies of gas bubbles trapped in glacial ice. As we learned in Module 1, the bubbles are effectively samples of ancient atmospheres; we can measure the concentration of CO2 and other trace gases like methane in these bubbles, and then, by counting the annual layers preserved in glacial ice, we can date these atmospheric samples, providing a record of how CO2 changed over time in the past. The figure below shows the results of some of the ice core studies relevant for the recent past -- back to the year 900 A.D.
The striking feature of these data is that there is an exponential rise in atmospheric CO2 (and methane, another greenhouse gas) that connects with the more recent Mauna Loa record to produce a rather frightening trend. Also shown in the above figure is the record of fossil fuel emissions from around the world, which shows a very similar exponential trend. Notice that these two data sets show an exponential rise that seems to begin at about the same time. What does this mean? Does it mean that there is a cause-and-effect relationship between emissions of CO2 and atmospheric CO2 levels? Although we should remember that science cannot prove things to be true beyond all doubt, it is highly likely that there is a cause-and-effect relationship -- it would be an extremely bizarre coincidence if the observed rise in atmospheric CO2 and the emissions of CO2 were unrelated.
How serious is our modification of the natural carbon cycle? Here, we need a slightly longer perspective from which to view our recent changes, so we return to the records from ice cores and look deeper and further back in time than we did in the figure we have been examining.
In addition to providing a record of the past concentration of CO2 in the atmosphere, as we learned in Module 1, the ice cores also give us a temperature record. By studying the ratios of stable isotopes of oxygen that make up the glacial ice, we can estimate the temperature (in the region of the ice) at the time the snow fell (glacial ice is formed by the compression of snow as it gets buried to greater and greater depths). From these data, shown in the figure below, we can see the natural variations in atmospheric CO2 and temperature that have occurred over the past 160,000 years (160 kyr).
In fact, looking at this much longer span of time enables us to clearly see that the present CO2 concentration of the atmosphere is unprecedented in the last several hundreds of thousands of years. As geoscientists, we are interested in more than just the last few hundred kiloyears, and so we look back into the past using sediment cores retrieved from the deep sea. Geochemists studying these sediments have been able to reconstruct the approximate concentration of CO2 in the atmosphere and the sea surface temperature (SST).
To find atmospheric CO2 levels equivalent to the present, we have to go back 2.5 million years. This means that, to the extent that the state of the carbon cycle is closely linked to the condition of the global climate, we are pushing the system toward a climate that has not occurred any time within the last several million years -- not something to be taken lightly.
The farther back in time we go, the more difficult it is to figure out how CO2 concentrations have changed, but that has not stopped some from attempting:
One thing that is clear is that further back in time, CO2 levels have been much, much higher, and average global temperatures have also been much higher. Why does the CO2 concentration change so much? This is a big question whose answer involves many factors, but consider two that are relevant to what we'll learn about in this module. Photosynthesis by land plants only became widespread starting in the Silurian (S on the timescale in the figure above), and photosynthesis is a major sink, or absorber, of atmospheric CO2. Sea level was much higher during the two big peaks in CO2 — this leaves less land area for photosynthesis, and it also decreases the planet's albedo, making it warmer. A warmer ocean absorbs less atmospheric CO2 and instead tends to release it to the atmosphere.
In conclusion, from this brief look at the record of fossil fuel emissions and atmospheric CO2 concentrations, it is clear that we have cause for concern about the effects of the global CO2 "experiment". Because of this concern, there is a tremendous effort underway to better understand the global carbon cycle. In the remainder of this module, we will explore the global carbon cycle by first examining the components and processes involved and then by constructing and experimenting with a variety of models. The models will be relevant to the dynamics of the carbon cycle over a period of several hundred years -- these will enable us to explore a variety of questions about how the system will behave in our lifetimes and a bit beyond.
The global carbon cycle is a whole system of processes that transfers carbon in various forms through the Earth’s different parts. The carbon that is in the atmosphere in the form of CO2 and CH4 (methane) doesn’t stay in the atmosphere for long — it moves from there to other places and takes different forms. Plants use the CO2 from the atmosphere in photosynthesis to make carbohydrates and other organic molecules, and from there it may return to the atmosphere as CO2, or it may enter the soil as still different compounds that contain carbon. Some carbon is deposited in sedimentary rocks from the oceans, and much later, this carbon may be released to the atmosphere. So, carbon moves around — it flows — from place to place.
Because CO2 is such an important greenhouse gas, the way the carbon cycle works is central to the operation of the global climate system. Later in this module, we will work with a computer model of the carbon cycle to do experiments that will help us understand how it works, but it will help to begin with an overview of the carbon cycle from the systems perspective.
What is meant by a systems perspective? It just means that we focus on the places where carbon resides (the reservoirs, in systems terminology), how it moves from reservoir to reservoir, how much of it moves from place to place, and what controls those movements. This same perspective is behind the simple climate model we worked on in Module 3.
First, let’s consider the main reservoirs of carbon. These can be seen in the diagram below, where each box represents a different reservoir, and remember that in each of them, the carbon may be in very different forms. Note a GT or gigaton is a billion metric tons or 10^15 grams, which is a whole lot of carbon!
The mantle reservoir is huge and somewhat removed from the other reservoirs, thus we will not really bother with it. Among the other reservoirs, you can see that there is a huge range in the sizes. The ocean biota contain a very small amount of carbon relatively speaking, while sedimentary rocks contain a vast quantity (in the form of calcite — CaCO3 — that forms limestones, and coal, petroleum, etc.).
Now, let’s look at the system with the flows included, as arrows connecting the reservoirs:
The black arrows represent natural processes of carbon transfer, while the red arrows represent changes humans are responsible for. The magnitudes of the flows are shown in units of gigatons of carbon per year. The diagram as constructed here represents a steady state if we just consider the black arrows; the flows going into each reservoir are equal to the flows going out of the reservoir — in other words, there is a balance. We will step through this system, talking about the processes involved in the flows, but first, let’s try to learn something from the numbers themselves in this diagram.
First, just a bit more on the notion of steady state. The diagram below illustrates some simple systems, one of which is not in a steady state, and the others of which are.
In the first example, A, more (10 units per time) is subtracted from the reservoir than is added (5 units per time), and so over time, the amount in the reservoir will decline — it will not remain constant, or steady. In each time step, it will lose 5 units of whatever the material is. In the other two (B,C), the amount added is the same as the amount subtracted, so these reservoirs will be in a steady state. If you look at example C, you see that the sum of inflows (5+5) is equal to the outflow (10).
When a system is in a steady state, we can say something about the average time something will spend in the reservoir — this is called the residence time for the reservoir. Here is a simple example: if there are 40,000 students at Penn State, and 10,000 students enter each year and 10,000 students graduate each year, then the system is in a steady state. The residence time is the total number of students divided by either the number entering or graduating each year — this gives 4 years as the average residence time. Here is a figure to explain the idea further:
The concept of residence time is a useful one in studying any kind of system because it tells us something about how quickly material is moving through a system, and more important, it tells us how quickly some part of the system, or the system as a whole, can respond to changes. If something has a short residence time, it can respond quickly to changes, whereas if it has a long residence time, it responds very slowly. Mathematically, the residence time is the same as something called the response time which, as the name implies, is a measure of how much time it takes for the system to respond to a change. The word "respond" in this context means "return to a steady state."
Although the residence time and the response time are often the same value, they represent different ideas. As we said earlier, the residence time is a measure of the average length of time something spends in a reservoir — like the average length of time a carbon atom spends in the atmosphere. The response time, instead, is a measure of how quickly something returns to steady state after some disturbance that knocks it out of steady state. So, response time is only meaningful in cases where a system has a tendency to remain in a steady state.
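The Penn State example above reduces to a one-line calculation, sketched here. Note the formula is only valid when the reservoir is in steady state, so that inflow and outflow are equal.

```python
# Residence time of a steady-state reservoir: the amount stored
# divided by the (equal) inflow or outflow rate.

def residence_time(amount, flow):
    return amount / flow

students = 40_000    # total enrollment
flow = 10_000        # entering = graduating each year (steady state)
print(residence_time(students, flow), "years")   # -> 4.0 years
```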
Now, let’s turn to the carbon cycle and consider some of the flows in and out of the reservoirs. What is the residence time for the atmosphere? To get this, we take the amount in the reservoir (750 GT) and divide it by the sum of the inflows or the outflows. Let’s take the outflows: 100 GT C/yr for photosynthesis + 90 GT C/yr going into the oceans. The residence time is thus:
750 GT / 190 GT/yr ≈ 3.9 years
This is a pretty short residence time. Now, let’s look at the deep ocean (which is the vast majority of the oceans) — its residence time is:
38,000 GT / 10 GT/yr = 3,800 years
We use 10 for the inflow/outflow value because we use the net flux of carbon carried by water sinking into and upwelling out of the deep ocean. The result, 3,800 years, is much longer than that of the atmosphere, and what this means is that the carbon cycle has some parts that respond quickly, but other parts that respond very slowly, and the very slow parts tend to put a damper on how quickly the other parts can change. In other words, if we suddenly inject carbon dioxide into the atmosphere, you might think that the short residence time of the atmosphere means that the excess CO2 can be removed very quickly, but because these reservoirs are linked together, it turns out that the deep ocean must return to its steady state before the atmosphere can get back to its steady state.
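The two residence-time calculations above can be written out explicitly, using the reservoir sizes and flows from the diagram in the text:

```python
# Residence times from the carbon-cycle flow diagram: reservoir size
# (GT of carbon) divided by total outflow (GT/yr).  Valid because the
# natural cycle shown in the diagram is in steady state.

def residence_time(amount_gt, flow_gt_per_yr):
    return amount_gt / flow_gt_per_yr

# Atmosphere: 750 GT; outflows are 100 (photosynthesis) + 90 (to ocean)
atmosphere = residence_time(750, 100 + 90)
# Deep ocean: 38,000 GT; net exchange of 10 GT/yr
deep_ocean = residence_time(38_000, 10)
print(f"atmosphere: {atmosphere:.1f} yr, deep ocean: {deep_ocean:.0f} yr")
```

The three-orders-of-magnitude gap between the two numbers is the point: the slow deep ocean sets the pace for how quickly the whole coupled system can return to steady state.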
We're going to take a step backwards for a second and think about a much simpler system, but one that has some things in common with the global carbon cycle. First, however, we need to recognize that the natural carbon cycle is something that always varies a bit, but it has some important feedbacks in it that tend to make it stable or steady. And then, along come humans, burning an impressive amount of fossil fuel and creating a new flow in the carbon cycle. To get a sense of how a system with a tendency to remain in a steady state might respond to a new flow, we turn to a simpler model.
Our simple model is a water tub with a drain and two faucets.
One faucet represents the natural addition of water to the tub, while the other represents a new flow. The tub has a drain that removes 10% of the water in the tub in a given time interval (the value of k is 0.1, and the drain flow is just the amount in the tub times k). When you first open the model, the extra faucet is set to zero, and if you run the model, you will see that it is in a steady state — the faucet and drain flows are equal, so the amount in the reservoir remains constant. What happens to the system if you increase the faucet flow using the upper knob to the right of the graph? If the faucet flow increases, then at first it will be greater than the drain flow and the amount in the tub will increase; but as the water level rises, so will the drain flow, and it will eventually become equal to the faucet flow, and the system will return to a steady state, but one with more water in the tub. Give it a try, and see what happens. At steady state, you can write a simple equation that says:
F = k*W
where F is the faucet flow (which equals the drain flow at steady state), k is the drain constant, and W is the amount of water in the tub.
It turns out that the response time for this system is just 1/k. In this example model, k = 0.1, so the response time is 10 time units (the units depend on how the flows are given — liters per second or per minute). This response time is not the length of time required for the system to reach the new steady state — it is the time required to accomplish 63% of the change to the new steady state. Why 63%? There is some math behind this choice (63% is 1 − 1/e, where e is the base of natural logarithms), but in essence, it is just a convention that helps us avoid the problem of picking the time when steady state is achieved, which is difficult since the system approaches it asymptotically. The figure below shows how this looks. In the graphs below, the blue line for the water tub is hidden beneath the red curve of the drain — these have different scales on the left-hand side, but they are exactly the same shape, so one plots over the other.
I changed the faucet here to 3, and the water in the tub rose until it reached a value of 30, where the drain value then became equal to the faucet value and the system reached a new steady state. As shown by the blue arrow above, this takes 10 time units to accomplish 63% of the change.
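The tub's behavior is easy to reproduce numerically. Here is a minimal sketch: the faucet value of 3 and k = 0.1 come from the example above, while the starting level of 10 (an old steady state with a faucet of 1) is an assumption made for illustration.

```python
# Euler integration of the tub: dW/dt = faucet - k*W.
# With faucet = 3 and k = 0.1, the new steady state is W = 3/0.1 = 30,
# and the response time is 1/k = 10 time units.
faucet, k = 3.0, 0.1
W = 10.0            # assumed starting steady state (old faucet of 1.0)
dt = 0.01
t = 0.0
history = []
while t < 60.0:
    W += (faucet - k * W) * dt   # inflow minus drain over one small step
    t += dt
    history.append((round(t, 2), W))

# After one response time (t = 10), W has covered about 63% of the
# change from 10 to 30, i.e., W is roughly 22.6.
```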
So, this is a system that has negative feedback (the drain) that drives it to a steady state, which means that regardless of the values of the faucet or the drain constant, k, the system will find a steady state.
Now, look at what happens if we turn on a new faucet — add a new flow to a system that is in a steady state. In the figure below, you see what happens if we turn the extra faucet on for a bit and then turn it off.
The system gets thrown out of its steady state temporarily, but returns to the original steady state when the extra faucet is turned back off. Note that there is a lag time here — the water tub peaks about 5 time units after the combined faucet flow (the sum of the two faucets) peaks. If we were to decrease k, then the response time would lengthen, and the lag time would also lengthen. Think of this spike in the extra faucet as being equivalent to a short-term addition of CO2 into the atmosphere. But what if we turn the extra faucet on and then leave it on for some time at a steady rate? This might be equivalent to us adding CO2 to the atmosphere and then keeping those emissions constant (sometimes referred to as the "stabilization" of emissions). We can simulate this scenario with our simple model, and the results are shown below.
What you see is that the system responds by increasing the water in the tub until a new steady state is reached. The length of time needed to achieve the new steady state is determined by the response time of the system, which again, is governed by the magnitude of the drain constant, k.
In the real carbon cycle, this response time is measured in tens of thousands of years. For example, remember that the residence time of the deep ocean is about 3,800 years. The response time of the whole carbon cycle must be much longer than this because CO2 emissions are cycled through more than just the deep ocean. Unfortunately, you can't simply add up the residence times of the individual reservoirs to get the response time of the whole carbon cycle — which is really what we want to know — because the system is much more complex than that. The natural carbon cycle will find a new steady state (it will "stabilize") in response to our carbon emissions, but it will take many thousands of years to do so. In the meantime, the system will continue to change as it makes this adjustment.
Carbon moves through the terrestrial realm through five main processes, which are represented as blue arrows in the figure below:
We will briefly explore these processes, beginning with photosynthesis.
Since its origin over 3 billion years ago, photosynthesis has been one of the most important processes on Earth, helping to make our planet habitable, in stark contrast to the other planets. The basic idea is that plants capture light energy and use it to split water molecules, then combine the products with carbon dioxide to make carbohydrates, which are used for fuel and construction of the plants; oxygen, which is crucial to making Earth habitable, is a by-product of this reaction, which is summarized as:
6CO2 + 6H2O + light energy => C6H12O6 + 6O2
This process takes place in the chloroplasts located in the interiors of leaves. Here, chlorophyll absorbs solar energy in the red and blue parts of the spectrum. This energy is then used to split a water molecule into hydrogen and oxygen; in the process, the plants gain chemical energy that is used in a companion process that converts carbon dioxide into carbohydrates, represented by C6H12O6 in the above equation.
The rate of consumption of CO2 by photosynthesis is mainly a function of water availability, temperature, the concentration of CO2 in the atmosphere, and key nutrients such as nitrogen. The importance of water in plant growth is obvious from looking at the equation above. Temperature is an important factor in many life processes, and photosynthesis is no exception. As a general rule, the rates of most metabolic processes increase with temperature, but there is usually an upper limit where the high temperatures begin to destroy important enzymes, or otherwise inhibit life functions. The fact that photosynthesis depends on the concentration of CO2 is not obvious, but it is very important. Plants take in their CO2 through small openings about 10 microns in diameter called stomata, which the plant can control like valves, opening and closing to adjust the rate of transfer. The more they let in, the faster the rate of photosynthesis and the faster the growth — but if they open their stomata wide to let in a lot of CO2, they can lose a lot of water, which is not so good. However, if there is a greater concentration of CO2 in the atmosphere, then the plants will get a good dose of CO2 by opening their stomata just a little bit, allowing them to conserve water. What this amounts to is increased efficiency of growth at higher levels of CO2. We call this effect CO2 fertilization, and it is an important way in which plants are our friends in helping to minimize the rise of CO2 in the atmosphere. You can see this effect in the graph below, which shows the theoretical relationship between the CO2 concentration in the atmosphere and the uptake of carbon from photosynthesis by land plants, summed up for the whole globe.
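The shape of that theoretical uptake curve can be sketched with a simple saturating function. This is only an illustration: a Michaelis-Menten form is one common way to represent CO2 fertilization, and the p_max and k_half values here are made up, not taken from the figure.

```python
def photosynthetic_uptake(co2_ppm, p_max=120.0, k_half=300.0):
    """Illustrative global uptake (Gt C/yr) as a saturating function of CO2.

    p_max and k_half are hypothetical parameters: uptake approaches p_max
    at very high CO2 and reaches half of p_max at co2_ppm = k_half.
    """
    return p_max * co2_ppm / (co2_ppm + k_half)

# The fertilization effect weakens as CO2 rises: each extra 100 ppm buys
# less additional uptake than the previous 100 ppm did.
gain_low = photosynthetic_uptake(400) - photosynthetic_uptake(300)
gain_high = photosynthetic_uptake(800) - photosynthetic_uptake(700)
```

The design choice here is just the saturating shape: uptake keeps rising with CO2, but with diminishing returns, which is the behavior the graph above conveys.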
The following video explains photosynthesis in great detail:
If the video does not play above, click here to be directed to the Photosynthesis video on YouTube [155].
If we think of photosynthesis as the process of making fuel (carbohydrates), then respiration can be thought of as the process of burning that fuel — using it for maintenance and growth. This process can be described in the form of a reaction, just like photosynthesis; the chemical reaction is just the reverse of photosynthesis:
C6H12O6 + 6O2 => 6CO2 + 6H2O + energy
Through respiration, plants (and animals) release water and carbon dioxide, and use up oxygen. Do the carbon flows involved in respiration and photosynthesis balance each other, as the equations seem to imply? The answer is no — otherwise, how could organisms grow?
Experiments on a variety of plants indicate that the ratio of photosynthesis to respiration is generally about 2 to 1. When plants are young, and growing rapidly, but with not much biomass to maintain, this ratio is even higher; in older, larger plants, this ratio is lower since more carbon needs to go towards maintenance.
Dead plant material enters the soil in two ways -- it falls on the surface as litter, and it is contributed below the surface from roots. The relative importance of these two pathways into the soil varies according to the plants in an ecosystem, but it appears that the two are commonly about equal, which may seem a bit surprising since loss of organic carbon from root systems is a process that we generally don't see. The flow of carbon associated with litter fall is roughly the difference between the photosynthetic uptake of carbon and the return of carbon through plant respiration. If this were not the case, then the size of the global land biota reservoir would be growing or declining, and although some regions are growing, others are shrinking, and they nearly balance out.
Respiration (sometimes called decay) also occurs within the soil, as microorganisms consume the dead plant material. In terms of a chemical formula, this process is the same as described above for plant respiration (the reverse of photosynthesis).
There is an unseen but fascinating universe of microbes living within the soil, and they are the key means by which nutrients such as carbon and nitrogen are cycled through the soil system. A great diversity of microorganisms live in the soil, and they are capable of consuming tremendous quantities of organic material. Much of the organic material added to the litter (the accumulated material at the surface of the soil) or within the root zone each year is almost completely consumed by microbes; thus, there is a reservoir of carbon with a very fast turnover time — on the order of 1 to 3 years in many cases. The by-products of this microbial consumption are CO2, H2O, and a variety of other organic compounds collectively known as humus (not the same as hummus, the Mediterranean chickpea purée!). Humus is a much less palatable compound, as far as microbes are concerned, and is not decomposed very quickly. After it is produced at shallow levels within the soil, it generally moves downward and accumulates in regions of the soil with high clay content. Part of the reason it accumulates in the lower parts of the soil is that there tends to be less oxygen in that environment, and the lack of oxygen makes it even more difficult for microbes to work on this humus and decompose it further. But eventually, due to various processes (animals burrowing, people plowing, etc.) that stir the soil, this humus moves back up to where there is more oxygen, and then the microbes will eventually destroy the humus and release some more CO2. This humus thus constitutes another, longer-lived reservoir of carbon in the soil. Carbon-14 (14C) dates on some of this soil humus give ages of several hundred to a thousand years. Taken together, the fast and slow decomposition processes, both driven by microbes, lead to an average carbon residence time of around 20 to 30 years for most soils.
The data used in our global carbon cycle model lead to a residence time of about 26 years for the global soil carbon reservoir.
These microbes (considered in terms of their respiratory output) are very sensitive to the organic carbon content of the soil as well as the temperature and water content, respiring faster at higher carbon concentrations, higher temperatures and in moister conditions.
In recent years, increasing attention has been directed at permafrost soil carbon, since the polar regions are warming much faster than the rest of the globe. Permafrost is soil that has been frozen for at least two years. As the permafrost thaws, carbon that was added to these soils by processes like litter fall will become available for soil microbes to respire and release to the atmosphere. In fact, this is almost surely happening already, but given that much of the permafrost is still frozen, we have probably not seen the real manifestation of this source of carbon. Estimates of the amount of carbon stored in permafrost are variable, but a figure of 1000 to 1500 Gt reflects current thinking; this is a huge amount of carbon and has the potential to significantly alter the future of atmospheric CO2 levels. As the permafrost begins to thaw, some estimates are that it will contribute something in the range of 2-5 Gt C/yr, which is significant compared to the human-related changes. Of course, some of this released carbon will be offset by new carbon sequestered into these formerly frozen soils, but initially, the system will not be in equilibrium, and these regions can be expected to be a net source of CO2 to our atmosphere.
Although most of the carbon loss from the soil reservoir occurs through respiration, some carbon is transported away by water running off over the soil surface. This runoff is eventually transported to the oceans by rivers. The actual magnitude of this flow is a bit uncertain, although it does appear to be quite small. The most recent estimates place it at 0.6 Gt C/yr.
Here are the terrestrial flow processes once again, but this time with the estimated magnitudes of flows included. Most of these are very large flows, and they have a seasonality to them — photosynthesis is obviously big in the growing season, but it is small in the winter. If land masses and land plants were equally divided in the northern and southern hemispheres, this seasonality would cancel out, but in today's world, a large majority of land and plants are in the Northern Hemisphere, so around July, photosynthesis is very strong. In fact, this on and off aspect of photosynthesis is the primary reason for the seasonal changes in atmospheric CO2 concentrations seen in the Mauna Loa record.
Far less obvious to us than the terrestrial processes we just discussed, the cycling of carbon in the oceans is tremendously important to the global carbon cycle. For example, the oceans absorb a large portion of the CO2 emitted through anthropogenic activities. As with the terrestrial part of the global carbon cycle, we will explore here the various processes involved in transferring carbon in and out of the oceans.
Below, we see a general depiction of the flows involved in the oceanic realm, along with the flow magnitudes.
Carbon dioxide can be dissolved in seawater, just as it can be dissolved in a can of soda. It can also be released from seawater, just as the CO2 from soda can also be released. This transfer of gas back and forth between a liquid and the atmosphere is an extremely important process in the global carbon cycle, since the oceans are such an enormous reservoir with the potential to store and release significant quantities of CO2.
The exchange of a gas like CO2 between the air and seawater is governed by the differences in concentrations, as shown in the figure below, where the solid red line represents the concentration (increasing to the right) in the air and in the ocean. The dashed line indicates what the concentration might look like after some exchange of CO2 occurs — lower concentration in the atmosphere, and higher in the oceans. But note that the change in concentration is less in the ocean than in the atmosphere, which is a result of some chemistry we will explore in just a bit.
As depicted in the figure above, the concentrations in the air and the sea are relatively constant (spatially, not temporally) since these two media undergo rapid and turbulent mixing that would tend to even out any systematic variations. The exception to this is a thin layer of water, just 20 to 40 microns thick (a micron is one thousandth of a millimeter), which, because of the surface tension of the water, is unable to mix well. This stagnant film is the barrier across which the diffusion has to occur. The rate of gas transfer is a function of the concentration difference and the thickness of the stagnant film of water (which thins when the winds are strong).
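The stagnant-film idea translates directly into Fick's first law: flux is proportional to the concentration difference across the film and inversely proportional to the film's thickness. The diffusivity value below is a rough order-of-magnitude figure for CO2 in water, and the whole function is a sketch, not a calibrated gas-exchange model.

```python
def film_flux(delta_c, film_thickness_m, diffusivity=1.6e-9):
    """Gas flux across a stagnant film (mol per m^2 per s), Fick's first law.

    delta_c: concentration difference across the film (mol/m^3)
    film_thickness_m: film thickness in meters (20-40 microns, per the text)
    diffusivity: rough value for CO2 in water (m^2/s) -- an assumption
    """
    return diffusivity * delta_c / film_thickness_m

# Strong winds thin the film; halving the thickness doubles the flux.
calm = film_flux(1.0, 40e-6)
windy = film_flux(1.0, 20e-6)
```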
In general, this flow depends on the concentration of CO2 in the atmosphere and the oceans, but it gets a little complicated because the concentration of CO2 in seawater depends on a number of other factors. To understand this process, and to see why the atmosphere's concentration changes more than the ocean, we need to have some sense of what happens to CO2 once it gets dissolved in seawater.
When CO2 from the atmosphere comes into contact with seawater, it can become dissolved into the water, where it undergoes chemical reactions to form a series of products, as described in the following:

CO2 (gas) <=> CO2 (dissolved)   (1)
CO2 (dissolved) + H2O <=> H+ + HCO3- (bicarbonate)   (2)
HCO3- <=> H+ + CO32- (carbonate)   (3)
The amount of CO2 as dissolved gas is what controls the concentration of CO2 shown in the figure above. This concentration depends strongly on the temperature — it is low in cold water and it is high in warm water, which means that the colder parts of the ocean absorb CO2 from the atmosphere and the warm parts of the ocean release CO2 into the atmosphere.
All of these reactions mean that in seawater, you can find all of these different forms or species of inorganic carbon co-existing, dissolved in seawater. In reality, though, bicarbonate (HCO3-) is the dominant form of inorganic carbon; carbonate (CO32-) and dissolved CO2 are important, but secondary (see figure below).
In the above equations (1-3), the double-headed arrows mean that the reactions can go in both directions, and generally do, until some balance of the different compounds is achieved -- a chemical equilibrium.
Notice that in the equations above, the hydrogen ion appears in a number of places — the concentration of H+ in seawater is what determines the pH of the water, or in other words, its acidity. Remember that low pH means more acidic conditions. It turns out that the ratio of bicarbonate (HCO3-) to carbonate (CO32-) depends on the pH; at lower pH, more of the carbon is in the form of bicarbonate than carbonate, and the fraction of dissolved CO2 gas also goes up, which tends to cause CO2 gas to move from the seawater into the atmosphere. As we will see in Module 7, higher acidity (lower pH) also has important implications for organisms that live in the sea.
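The pH dependence of the carbon species can be sketched with the standard equilibrium expressions. The dissociation constants below (pK1 ≈ 6.0, pK2 ≈ 9.1) are rough seawater values assumed for illustration; the point is simply that at a typical seawater pH of about 8.1, bicarbonate dominates, with carbonate and dissolved CO2 secondary, and that lowering the pH shifts carbon from carbonate toward bicarbonate.

```python
def carbon_species_fractions(pH, pK1=6.0, pK2=9.1):
    """Fractions of DIC present as dissolved CO2, HCO3-, and CO32-.

    pK1 and pK2 are assumed, illustrative seawater dissociation constants.
    """
    h = 10.0 ** (-pH)
    K1, K2 = 10.0 ** (-pK1), 10.0 ** (-pK2)
    denom = h * h + K1 * h + K1 * K2
    f_co2 = h * h / denom
    f_hco3 = K1 * h / denom
    f_co3 = K1 * K2 / denom
    return f_co2, f_hco3, f_co3

co2_frac, hco3_frac, co3_frac = carbon_species_fractions(8.1)
# At pH ~8.1, bicarbonate is by far the largest fraction of the three.
```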
Obviously, the ratio of these two forms of carbon is important, so we need to ask what controls it. Without getting into the chemistry too much, the ratio depends on two main things, and it depends on these things because they determine the electrical charge balance in the water — the charges from all the positive and negative ions dissolved in the water have to add up to zero, and you can see that if you have more carbonate (minus 2 charge) than bicarbonate (minus 1 charge), you have more total negative charge. So, the 2 important factors here are: (1) the alkalinity, which is in effect the excess positive charge from all the other ions dissolved in the water that must be balanced by the negative charges of the carbon species; and (2) the total amount of dissolved inorganic carbon (DIC).
If the alkalinity is high, then more of the DIC has to take the form of carbonate (minus 2 charge), and a result is that the pH is higher, meaning less acidic. If the total DIC is large, then to get the charges to balance, more of the DIC has to be in the form of bicarbonate (minus 1 charge), which leads to lower pH and more acidic conditions.
Let’s return now to the figure that showed the exchange of CO2 between the atmosphere and ocean and the question about why the ocean’s CO2 concentration changed less than the atmosphere. The reason for this is the carbon chemistry reactions that shift some of the CO2 dissolved gas into the forms of bicarbonate and carbonate — this reduces the amount of carbon in the form of dissolved CO2 gas, so the concentration of dissolved CO2 does not increase as much as it would if these reactions did not take place.
As you can see, the chemistry of carbon in seawater is relatively complex, but it turns out to be extremely important in governing the way the global carbon cycle operates and explains why the oceans can swallow up so much atmospheric CO2 without having their own CO2 concentrations rise very much.
Let's see if we can summarize this carbonate chemistry — it is important to have a good grasp of this if we are to understand how the global carbon cycle works.
In the real world, there are important variations in the gas transfer between the ocean and atmosphere. This can be seen in the figure below, which represents a kind of snapshot of this transfer across the globe. The units here are grams of C per m2 per year, and each box is about 1e6 m2. The red, orange, and yellow colors represent places where the oceans are giving up CO2 to the atmosphere; the blue and purple areas are places where the oceans are sucking up atmospheric CO2. Summing these up, we find that the oceans are taking up around 92-93 Gt C/yr and they are releasing about 90 Gt C/yr — for a net flow of 2-3 Gt C/yr into the oceans — this represents something like 25-30% of the carbon we are adding to the atmosphere by burning fossil fuels. This exchange is variable in space and time, but a few general features can be pointed out. In general, the colder parts of the oceans absorb CO2 and the warmer parts release CO2 into the atmosphere. This makes sense because CO2 is more soluble in colder water.
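The net flow quoted above is just the small difference between two large gross flows; a couple of lines make the bookkeeping explicit. The 9 Gt C/yr fossil fuel emissions figure is the one given later in this module.

```python
ocean_uptake = 92.5     # Gt C/yr -- midpoint of the 92-93 quoted above
ocean_release = 90.0    # Gt C/yr
fossil_emissions = 9.0  # Gt C/yr -- current figure cited in this module

net_into_ocean = ocean_uptake - ocean_release        # ~2.5 Gt C/yr
fraction_absorbed = net_into_ocean / fossil_emissions
# fraction_absorbed is ~0.28, i.e., the 25-30% range quoted in the text
```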
The surface waters of the world’s oceans are home to a great number of organisms that include photosynthesizing phytoplankton at the base of the food chain. These plants (and cyanobacteria) utilize CO2 gas dissolved in seawater and turn it into organic matter, and just like land plants, phytoplankton also respire, returning CO2 to the surface waters.
At the same time, many planktonic organisms extract dissolved carbonate ions from seawater and turn them into CaCO3 (calcium carbonate) shells. When these planktonic organisms die, their soft parts are mainly consumed and decomposed very quickly, before they can settle out into the deeper waters of the oceans. This decomposition thus returns carbon, in the form of CO2, to seawater.
However, some of the organic remains and the inorganic calcium carbonate shells will sink down into the deep oceans, thus transferring carbon from the shallow surface waters into the huge reservoir of the deep oceans. This transfer is often referred to as the biologic pump, and it causes the concentration of CO2 gas, and also DIC in the surface waters to be less than that of the deeper waters. This can be seen in the figure below, which shows the vertical distribution of DIC (and also alkalinity) in a profile view for some of the major regions of the world's oceans.
Why is the alkalinity reduced in the surface waters? For the same reason that DIC is depleted. Planktonic organisms make shells of CaCO3, and when these sink to the seafloor, they carry Ca2+ ions with them, thus reducing the alkalinity. Much of this CaCO3 is later dissolved when it reaches deeper parts of the oceans, which explains the higher alkalinity values in the deep waters, as seen in the figure above. By controlling the concentration of CO2 gas dissolved in the surface waters, the planktonic organisms exert a strong influence on the concentration of CO2 in the atmosphere. For instance, if the biologic pump were turned off, atmospheric CO2 would rise to about 500 ppm (relative to the current 408 ppm); if the pump were operating at maximum strength (i.e., complete utilization of nutrients), atmospheric CO2 would drop to a low of 140 ppm. Clearly, this biologic pump is an important process.
Alkalinity is very important because when it is reduced, the shells of species that build them out of CaCO3 will begin to dissolve. Today, the most sensitive species are those that make shells out of a form of CaCO3 called aragonite, which is more susceptible to dissolution than calcite, the other form of CaCO3. Corals build their skeletons out of aragonite and so are very susceptible, as we will see in Module 7.
What controls the strength of this biologic pump? The photosynthesizing plankton require nutrients in addition to CO2 in order to thrive; specifically, they require nitrogen and phosphorus. Most of these plants need P, N, and C in a ratio of 1:16:125, and since at present the ratio of P to N in ocean water is about 1:16, both P and N limit the growth of these phytoplankton. Photosynthetic activity of plankton can be mapped out by satellites tuned to record differences in water color due to the presence of chlorophyll. This distribution is shown in the figure below.
If the nutrients in seawater were being utilized to the maximum extent possible, there would be practically no P and N dissolved in seawater. But in fact, as shown by the figure below, the concentration of P tells us that the biologic pump is not operating at maximum efficiency. In the map below, the purple regions represent regions with no phosphate in the surface water of the oceans, meaning that there is simply a lack of nutrients, or that all the nutrients are utilized. In particular, it is the cold, polar regions that are not utilizing all of the available nutrients. This may be due in part to the temperature, but it may also be related to a paucity of iron, a minor nutrient that is apparently lacking in the colder regions, especially in the Southern Ocean ringing Antarctica.
In addition to nutrients, the strength of this biological pump is sensitive to the pH of the ocean water. The organisms in the oceans are adapted to a pH of around 8.3 or 8.4, but as we discussed earlier, if the oceans take up too much CO2 too quickly, the pH will decrease and move outside the optimum range for the organisms in the oceans. Thus, lower pH levels (i.e., higher acidity) will probably mean a reduction in the strength of the biological pump, which will, in turn, limit the ocean's ability to absorb more carbon.
As mentioned above, downwelling (sinking of dense water) transfers cold and/or salty surface waters into the deep interior of the oceans, and as a result, carbon is transferred as well. The magnitude of the flow is thus a function of the volume of water flowing and the average concentration of carbon in the cold surface waters, which is itself a function of the total amount of carbon stored in this reservoir, assuming that the size of the reservoir is not changing appreciably over the few hundred years that this model is intended to be used for.
Downwelling occurs primarily near the poles, where surface waters are strongly cooled by contact with the air. This cooling leads to a density increase. The formation of ice from seawater at the margins of Antarctica increases the salinity of the seawater there, adding to the density of the water. This dense water then sinks and flows through the deep oceans, effectively mixing them on a timescale of about 1000 years or so (the Atlantic Ocean mixes somewhat faster, which helps explain the smaller ΣCO2 and alkalinity gradients seen in the figure above).
Upwelling is just the opposite of downwelling, and as deep waters rise to the surface, they bring with them carbon, nutrients, and alkalinity. The total transfer of carbon is thus a function of the volume of water involved in this flow and the amount of carbon stored in the deep ocean reservoir. Upwelling occurs in areas of the oceans where winds and surface currents diverge, moving the surface waters away from a region; in response, deep waters rise up to fill the "void". Upwelling occurs along the equator, where there is a strong divergence, and also along the margins of some continents, such as the west coast of South America. This upwelling water also brings with it nutrients such as nitrogen and phosphorus, making these waters highly productive.
Note that the amount of carbon transferred by this flow is greater than the downwelling flow. This is not because the volume of flow is different in these two processes, but rather because the concentration of carbon in the deep waters of the ocean is greater than that in the shallow surface waters, due in part to the operation of the biologic pump mentioned above.
Some of the carbon, both organic and inorganic (i.e., calcium carbonate shells) produced by marine biota and transferred to the deep oceans settles out onto the sea floor and accumulates there, eventually forming sedimentary rocks. The magnitude of this flow is small — about 0.6 Gt C/yr — relative to the total amount transferred by sinking from the surface waters — 10 Gt C/yr. The reason for this difference is primarily because the deep waters of the oceans dissolve calcium carbonate shell materials; below about 4 km, the water is so corrosive that virtually no calcium carbonate material can accumulate on the seafloor. In addition, some of the organic carbon is consumed by organisms living in the deep waters and within the sedimentary material lining the sea floor. This consumption results in the release of CO2 into the bottom waters and thus decreases the amount of carbon that is removed from the ocean through this process. It is worth noting that the process of organic carbon consumption on the seafloor is another microbial process and is very similar to the soil respiration flow described earlier. Since the microbes living on the seafloor require oxygen to efficiently accomplish this task, the supply of oxygen to the seafloor by deep currents is an important part of this process.
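The numbers above imply that only a few percent of the sinking carbon survives to become sediment:

```python
sinking_flux = 10.0   # Gt C/yr leaving the surface waters (from the text)
burial_flux = 0.6     # Gt C/yr accumulating on the sea floor (from the text)

buried_fraction = burial_flux / sinking_flux   # 0.06 -- only about 6%
# The other ~94% is dissolved (the CaCO3 shells) or respired by seafloor
# microbes, returning CO2 to the deep waters.
```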
When sedimentary rocks deposited on oceanic crust are subducted [160] at a trench where two tectonic plates of Earth’s surface converge, they may melt or undergo metamorphism; in either case, the carbon stored in calcium carbonate -- limestone -- is liberated in the form of CO2, which ultimately is released at the surface. The CO2 may come out when a volcano erupts, or it may slowly diffuse out from the interior via hot springs, but in both cases, it represents a transfer of carbon from the reservoir of sedimentary rocks to the atmosphere. The magnitude of this flow is quite small and is adjusted here to a value of 0.6 Gt C/yr in order to create a model in steady state. This flow is defined as a constant in the model, although in reality, it will vary according to the timing of large volcanic eruptions. An extremely large volcanic eruption may emit carbon at a rate of around 0.2 Gt C/yr for a year or two, creating a minor fluctuation.
Humans have exerted an enormous influence on the global carbon cycle, largely through deforestation and fossil fuel burning. In this section, we explore how these processes have led to changes in the dynamics of carbon in the atmosphere.
Another pathway for carbon to move from the sedimentary rock reservoir to the atmosphere is through the burning of fossil fuels by humans. Fossil fuels include petroleum, natural gas, and coal, all of which are produced by the slow transformation of organic carbon deposited in sedimentary rocks — essentially the fossilized remains of marine and land plants. In general, this transformation takes many millions of years; most of the oil and gas we now extract from sedimentary rocks is on the order of 70-100 million years old. New fossil fuels take a very long time to form, and we are using them up much, much faster than they are being formed, meaning that if we keep using fossil fuels at the rate we are today, we will run out! The run-out date depends on new discoveries and on our ability to extract fuels more efficiently by processes like fracking, but we will be close to running out late this century.
These fossil fuels are primarily composed of carbon and hydrogen. For instance, methane, the main component of natural gas, has a chemical formula of CH4; petroleum is a more complex compound, but it, too, involves carbon and hydrogen (along with nitrogen, sulfur, and other impurities). The combustion of fossil fuels involves the use of oxygen and the release of carbon dioxide and water, as represented by the following description of burning natural gas:
CH4 + 2O2 => CO2 + 2H2O
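One consequence of this stoichiometry is worth spelling out: each mole of CH4 yields one mole of CO2, but CO2 is the heavier molecule, so burning a kilogram of methane releases nearly three kilograms of CO2. A quick check with standard atomic masses:

```python
M_C, M_H, M_O = 12.011, 1.008, 15.999   # g/mol, standard atomic masses

M_CH4 = M_C + 4 * M_H     # ~16.04 g/mol of methane
M_CO2 = M_C + 2 * M_O     # ~44.01 g/mol of carbon dioxide

co2_per_kg_ch4 = M_CO2 / M_CH4   # ~2.74 kg of CO2 per kg of CH4 burned
```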
Beginning with the onset of the industrial revolution in the latter part of the 19th century, humans have been burning increasing quantities of fossil fuels as our primary energy source.
As a consequence, the amount of CO2 emitted from this burning has undergone an exponential rise that follows the exponential rise in the human population. The magnitude of this flow is currently about 9 Gt C/yr. This number also includes the CO2 generated in the production of cement, where limestone is burned, liberating CO2.
As you can see in the graph above, this flow has changed considerably over time, as the human population has increased and as our economies have become more industrialized, with a big thirst for the energy provided by the combustion of fossil fuels. The model we will work with in the lab activity for this module includes this history, beginning in 1880 and going up to 2010; beyond 2010 is the realm of future projections, which can be altered to explore the consequences of choices we might make or not make in the future. Part of the new energy economy that is key to our future is the use of so-called renewable energy sources, including wind, solar, and geothermal energy, that emit little or no carbon. There are some interesting map-view representations of this history of fossil fuel carbon emissions in the video below:
If the video does not play, you can view it at YouTube: Annual Fossil-Fuel CO2 Emissions [164]
The other form of human alteration of the global carbon cycle is through forest cutting and burning and the disruption of soils associated with agriculture. When deforestation occurs, most of the plant matter is either left to decompose on the ground or it is burned, the latter being the more common occurrence. This process reduces the size (the mass) of the land biota reservoir, and the burning adds carbon to the atmosphere. Land-use changes other than deforestation can also add carbon to the atmosphere. Agriculture, for instance, involves tilling the soil, which leads to very rapid decomposition and oxidation of soil organic matter. This means that in terms of a system, we are talking about two separate flows here — one draining the land biota reservoir, the other draining the soil reservoir; both flows transfer carbon to the atmosphere. Current estimates place the total addition to the atmosphere from forest burning and soil disruption at around 2-3 Gt C/yr; estimates attribute roughly 50% to 70% of this to forest burning, with soil disruption making up the remainder. Deforestation is a particular problem in the Amazon, as we will see in Module 9.
The actual history of this alteration to the natural carbon cycle is not well-constrained — not nearly as well known as the fossil fuel burning history — but we include a reasonable history that reflects patterns of land settlement and forest clearing.
The goals of this lab are to:
Please make sure that you read the Introduction to the lab. Skipping it will result in a lot of confusion and a lower score on the assignment.
In the lab activity for this module, we will be working with a STELLA model of the global carbon cycle that is attached to the climate model we used in Module 3. This model incorporates the processes of carbon transfer in the terrestrial and oceanic realms discussed in the previous sections; it also includes the history (from 1880 to 2010) of human impacts on the carbon cycle in the form of emissions from burning fossil fuels, burning forests, and disrupting the soil. The model is initially set up to represent the carbon cycle in a steady state just before the industrial revolution, at which point human alterations to the carbon cycle began in earnest. We will use this model to explore different carbon emissions scenarios for the future, to see how the climate and carbon cycle might respond.
Here is what the model looks like, in a very simple, stripped-down form:
In this figure, you can see the initial amounts of carbon in each reservoir in Gt and the annual flows of carbon between reservoirs in Gt C/yr. A couple of things should be pointed out about this model. In some cases, flows have been combined into things called bi-flows that have arrows on each end; this means that carbon can flow either way. There is a bi-flow connecting the atmosphere and surface ocean that represents the two-way transfer of ocean-atmosphere exchange. There is another bi-flow connecting the surface and deep ocean that represents upwelling and downwelling combined into one.
The real model is quite a bit more complex-looking, as can be seen below:
The complications here arise from the fact that most of the flows are expressed by rather complicated equations. Most of the flows have small red arrows attached to them; these show you what things affect the flow rate. Think of the circle in the middle of the flow pipe as being a valve on a water pipe that controls how much water moves through the pipe. In many cases, the red arrows come from the reservoir that is being drained by the flow; this means that the outflow is dependent on how much is in the reservoir — when you have more in a reservoir, the outflow is often greater, and in essence, the outflow is a percentage of how much is in the reservoir.
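The idea that an outflow is a percentage of its reservoir can be sketched in a few lines of code. This is an illustrative toy model, not the course's STELLA model; the reservoir size, rate constant, and inflow below are made-up numbers:

```python
# Toy stock-and-flow model: the outflow is a fixed fraction of the
# reservoir, as in many of the STELLA flows described in the text.
# All numbers here are illustrative, not values from the course model.
def run_reservoir(stock, rate_constant, inflow, years):
    """Euler integration with a 1-year time step."""
    history = [stock]
    for _ in range(years):
        outflow = rate_constant * stock   # outflow scales with reservoir size
        stock = stock + inflow - outflow
        history.append(stock)
    return history

# Start away from balance: with inflow = 10 and k = 0.02, the steady
# state is inflow / k = 500, so a stock of 600 decays toward it.
trajectory = run_reservoir(stock=600.0, rate_constant=0.02, inflow=10.0, years=400)
print(round(trajectory[-1], 1))  # converges to the steady-state value of 500.0
```

This self-stabilizing behavior, where a bigger reservoir drives a bigger outflow, is what lets the full model sit in a steady state before the human perturbations are switched on.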
Note that the atmosphere reservoir is connected to a converter called pCO2 atm — this is the concentration of CO2 in the atmosphere and the units are in parts per million or ppm, the same units that CO2 concentrations are typically given in. In this model, the initial amount of carbon in the atmosphere gives a pCO2 value of 280 ppm (and by now, it is just over 400 ppm). The pCO2 atm converter is in turn connected to the same climate model we used in Module 3, where it determines the strength of the greenhouse effect. The climate model calculates the temperature at each moment in time and then passes that information back to the carbon cycle model in the form of a converter called global temp change, which is the change in global temperature relative to the starting temperature — this is like a temperature anomaly.
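The conversion the pCO2 atm converter performs can be sketched roughly as follows. This assumes the commonly quoted factor of about 2.13 Gt C per ppm of CO2, which may differ slightly from the model's internal constant:

```python
# Rough conversion between atmospheric carbon mass (Gt C) and pCO2 (ppm).
# The factor 2.13 Gt C per ppm is a commonly used approximation, not a
# value taken from the course model.
GT_C_PER_PPM = 2.13

def gtc_to_ppm(gt_carbon):
    return gt_carbon / GT_C_PER_PPM

# A preindustrial atmosphere of roughly 596 Gt C gives the 280 ppm
# starting value quoted in the text.
print(round(gtc_to_ppm(596.0)))  # -> 280
```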
The global temp change converter is then attached to a couple of other converters that connect to the photosynthesis flow and the soil respiration flow. Both of these flows are sensitive to temperature (both increase with warming); in each case, the temperature change combines with a parameter called a temperature sensitivity. You can see something above called Tsens sr — this is the temperature sensitivity for soil respiration. Global temp change is also connected to the surface temperature of the oceans (T surf) via a "ghosted" version of the converter — a dashed-line copy that helps eliminate long connecting arrows running all over the diagram.
The photosynthesis flow has lots of converters associated with it since it is dependent on numerous factors, but the main things are temperature and the atmospheric CO2 concentration.
The model also includes a whole set of connected converters in the upper right that do all of the calculations related to the carbon chemistry in the oceans; this is where the pH or acidity of the surface ocean is calculated.
At the very top right of the model, there is a converter called Observed Atm CO2 that contains the observed history of atmospheric CO2 concentration since 1880. This is in the model so that we can test how good the model is. If our carbon cycle model is good, then it should calculate a pCO2 that closely matches the observed record.
The model includes the history of carbon emissions from burning fossil fuels shown in the previous section; it also includes a history of land use changes that impact the carbon cycle. These land use changes are broken up into tree burning (which accompanies deforestation) and soil disruption (related to farming); these are added to the flows of carbon from the land biota and soil back into the atmosphere. These human alterations to the carbon cycle can be displayed in the model by clicking on the pink graph icons labeled ffb and land use changes on the right side of the model, as shown below.
As before, the model we will work with runs in a browser; here is a link to the model [166]. When you follow the link, you should see a screen like this (note the model has changed, but the functions are similar):
You can think of this as the control panel for the model, where you can run it, stop it, make changes, and look at the results in the form of different graphs. You can access the different graphs by clicking on the triangular tab at the lower left of the graph window. The controls here consist of a set of switches you can turn on or off — these will determine which of 3 different emissions scenarios are applied to the model. The three buttons on the right-hand side show what the 3 emissions scenarios look like. The video below explains how to work the switches. There is also a switch that can be used to either include or exclude the carbon emissions to the atmosphere from human-related land-use changes such as deforestation and soil disruption, but we will not mess around with this switch — just leave it in the "on" or up position as it is in the diagram above.
This initial model is set up to run from 1880 to 2100. Later, we will work with a model that runs farther into the future.
Download this lab workbook as a Word document: Graded Lab Module 5. [167] (Please download required files below.)
Answer the questions in the workbook. Then, when you are ready, complete the Module 5 Lab 5 Submission in a timed environment. Make sure you have the model ready to run, as we will be asking additional questions as indicated below.
To begin with, we will use this model [168] of the carbon cycle (which is coupled to the same climate model we used in Module 3) to learn a few basic things about how the model responds to different scenarios of carbon emissions from fossil fuel burning (FFB). Note, the A2 scenario is also known as "Business-as-Usual" (BAU), in which we make no efforts to limit carbon emissions. Be sure to watch the video that introduces this model and explains how to use the switches to change the FFB scenarios. To get the right answers, it is imperative that you have the switches in the correct positions. If in doubt, try reloading the model (reloading the web page) and restoring all devices.
Before running the model, try to guess what will happen to global temperature change under the two reduced emissions scenarios — in one of them (the leveling off scenario), the emissions are held constant after 2010, and in the other (the halt scenario), they drop to zero after 2010.
1. Will the global temperature change level off by 2100 if the emissions level off?
2. Will the global temperature change drop to 0 by 2100 if the emissions drop to 0?
Now run all three emissions scenarios (keep the land use switch in the on position) in the following sequence: A2 (BAU), then FFB Leveling Off, followed by FFB Halt.
3. Focus on the second scenario — what happens to the global temperature change when the FFB emissions level off (leveling off begins in 2010)?
4. Now turn to the FFB Halt scenario — what happens to the global temperature change when the FFB emissions halt (the halt begins in 2010)?
5. QUESTION IN CANVAS SUBMISSION ONLY.
6. QUESTION IN CANVAS SUBMISSION ONLY.
7. QUESTION IN CANVAS SUBMISSION ONLY.
Now we turn to a version of the model [169] that has three emissions scenarios from the IPCC.
Again, the A2 scenario is also known as “Business-as-Usual” in which we make no efforts to limit carbon emissions; the A1B scenario is one of modest reductions in emissions; the B1 scenario is one of more drastic reductions. Run all three emission scenarios (A2, A1B, and B1) with the permafrost switch off and the land use switch on, then answer the following questions.
Temperature (Page 1 of the graph pad)
8. Which scenario produces the largest warming in 2100?
9. Which scenario produces the smallest warming in 2100?
10. What is the temperature difference between the highest and lowest emission scenarios? (Answer to 2 decimal places.)
11. QUESTION IN CANVAS SUBMISSION ONLY.
12. QUESTION IN CANVAS SUBMISSION ONLY.
Atmospheric CO2 (pCO2 atm on Page 2 of the graph pad)
13. Which emission scenario has the largest impact on drawing down CO2?
14. When does that decrease begin?
15. When does the rate of CO2 increase in A1B start to decrease?
16. QUESTION IN CANVAS SUBMISSION ONLY.
17. QUESTION IN CANVAS SUBMISSION ONLY.
pH (Page 3 of the graph pad)
pH is a measure of the acidity of the ocean — it is related to the amount of CO2 dissolved in the oceans. More CO2 in the oceans lowers the pH, which means the water is more acidic (a phenomenon known as ocean acidification). We will see in Module 7 that the key variable controlling CaCO3 precipitation by reefs and other shell-forming organisms is a variable called saturation, which is indirectly related to the pH. Let's assume for the next four questions that coral framework precipitation in a species of coral declines at a pH of 8.0 and stops entirely below a pH of 7.8 (again, this is hypothetical, since saturation is the key variable).
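To get a feel for how pH responds to rising CO2, here is a very rough back-of-the-envelope sketch. It assumes, purely for illustration, that surface-ocean hydrogen ion concentration scales in proportion to atmospheric pCO2 at constant alkalinity; the model's actual carbonate chemistry calculations are more involved:

```python
import math

# Back-of-the-envelope pH estimate (an illustrative assumption, not the
# model's carbonate chemistry): if [H+] scales roughly with pCO2 at
# constant alkalinity, then pH ~ pH_ref - log10(pCO2 / pCO2_ref).
PH_REF, PCO2_REF = 8.2, 280.0   # approximate preindustrial values

def approx_ph(pco2_ppm):
    return PH_REF - math.log10(pco2_ppm / PCO2_REF)

for pco2 in (280, 400, 560, 800):
    print(pco2, "ppm ->", round(approx_ph(pco2), 2))
```

Even this crude sketch shows the key point: a doubling of pCO2 from preindustrial levels drops the pH by roughly 0.3 units, which is enough to cross the hypothetical 8.0 threshold used in the questions below.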
18. List the emission scenarios that result in a slow-down in shell precipitation at any time during the model run (i.e., pH drops below 8.0).
19. Which of the scenarios will stop the growth of coral reefs?
20. What year does your answer from the previous question happen?
21. QUESTION IN CANVAS SUBMISSION ONLY.
22. QUESTION IN CANVAS SUBMISSION ONLY.
Now, we will see what impact permafrost melting might have on the carbon cycle and climate. First, hit the “refresh” button on the browser to return the model to its original state. Run the A2 scenario with the PF switch in the off position, and then run it again with the PF switch turned on.
23. How much additional warming is caused by 2100 in the A2 scenario by the permafrost melting?
24. How much does the pCO2 atm increase by 2100 as a result of the permafrost melting?
Hit the refresh button again, and then run the B1 scenario (the one with the more drastic reductions) with the PF switch off, then run it again with the PF switch on.
25. How much additional warming is caused by 2100 in the B1 scenario by the permafrost melting?
26. How much does the pCO2 atm increase by 2100 as a result of the permafrost melting?
This result might be a surprise to you, but remember how CO2 affects the climate — an increase of something like 100 ppm is much more important at lower concentrations than higher concentrations. So, going from 200 ppm to 300 ppm causes more warming than going from 700 ppm to 800 ppm.
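This logarithmic behavior can be checked with a standard simplified forcing formula, dF = 5.35 ln(C/C0) in W/m2 (a textbook approximation, not taken from the course model):

```python
import math

# Simplified CO2 radiative forcing (a common textbook approximation):
#   dF = 5.35 * ln(C / C0)   [W/m^2]
# Equal ppm steps produce less forcing at higher concentrations.
def forcing(c_new, c_old):
    return 5.35 * math.log(c_new / c_old)

low_step = forcing(300, 200)   # 200 -> 300 ppm
high_step = forcing(800, 700)  # 700 -> 800 ppm
print(round(low_step, 2), round(high_step, 2))  # -> 2.17 0.71
```

The same 100 ppm increase yields about three times more forcing at low concentrations than at high ones, which is why the permafrost carbon matters more in the B1 scenario than in A2.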
27. QUESTION IN CANVAS SUBMISSION ONLY.
28. QUESTION IN CANVAS SUBMISSION ONLY.
In this module, you should have grasped the following concepts:
You should have read the contents of this module carefully, completed and submitted any labs, the Yellowdig Entry and Reply, and taken the Module Quiz. If you have not done so already, please do so before moving on to the next module. Incomplete assignments will negatively impact your final grade.
Approximately 71% of the Earth’s surface is covered by water. Moreover, well over 99% of the total heat budget of the earth is contained in the ocean. We have already seen how heat is the engine behind climate dynamics. It logically follows, therefore, that the oceans are an integral part of climate. In this section, we will focus on the physics of ocean circulation and how it helps drive climate. We will show that the effect on climate can be very large (including the whole ocean) or more regional, limited to an individual storm system. Finally, we will focus on the ways in which the ocean is likely to change in the future and how that change will have a profound impact on climate.
If you have spent time at the beach or in a coastal city, you will likely appreciate the way in which the ocean affects the weather and climate of the coastal region. Oceans have tremendously high heat capacity, so they have a large damping effect on climate. In the summer, the ocean acts to cool the land, and in the winter, the opposite effect occurs and the land is kept warmer. Take coastal California, for example. In the months of June and July, San Francisco is one of the coldest locations in the United States, yet in the winter, the city is among the warmest places. This dichotomy is a result of the effect of heat from the Pacific Ocean influencing the climate of the California coastline.
On completing this module, students are expected to be able to:
After completing this module, students should be able to answer the following questions:
Below is an overview of your assignments for this module. The list is intended to prepare you for the module and help you to plan your time.
Action | Assignment | Location |
---|---|---|
Humans have been involved in the scientific study of the ocean for hundreds of years. Most early ocean voyages were focused on exploration, colonization, and economic gain. However, these early pioneers had to have an understanding of the ocean to survive their time at sea. It wasn't until late in the 19th century that expeditions had the single goal of learning about the ocean. Charles Darwin's famous five-year voyage on the HMS Beagle from 1831 to 1836 provided a treasure trove of information about the ocean, especially its geology and natural history. The HMS Challenger Expedition of 1872-1876 made even more important oceanographic discoveries.
World War II from 1939-1945 ushered in a new era in ocean exploration. The importance of naval operations, and particularly submarines in combat, provided a major incentive for a much more accurate understanding of the bathymetry of the ocean as well as the acoustic properties of seawater. The urgency of acquiring this information intensified in the Cold War interval from 1946 to 1991 when submarines played a central role in covert operations. Around the world, federal investment in oceanographic research ballooned, and research ships were built with the sole purpose of mapping the oceans and studying their properties. This age brought major discoveries of the ocean and its geology, the life it supports, its chemistry, and its effect on climate.
The scientific discovery of the oceans was originally focused on ships, but more recently our technological capabilities have surged, and now submersibles and unmanned remotely operated vehicles (ROVs) and autonomous underwater vehicles (AUVs) provide a relatively low operating cost means of acquiring many different types of data.
This technology is changing the way that oceanography is being conducted, as demonstrated in the following video. Click play to watch the video.
At the same time, satellites provide continuous information about the ocean surface, while data such as temperature and surface current velocity is obtained from long-term deployments of instruments attached to buoys. In addition, seismological data are acquired by seismometers anchored on the sea bottom. Nevertheless, modern oceanography still relies heavily on sea-going vessels for sampling waters and biotas from plankton to fish, for deploying instruments that measure in-situ properties, and for taking cores. We have already learned about such coring techniques in Module 1.
In the future, seagoing research vessels are likely to be extremely high-tech. For example, the SeaOrbiter is due to launch in the near future. Part giant ship and part giant submarine, the vessel will enable study of the sea surface simultaneously with its depths. One of the goals of the SeaOrbiter is to allow scientists to remain at sea and study underwater for extended periods, a pursuit not possible in a traditional submarine.
We begin this module with an overview of the properties of seawater and follow with a thorough discussion of the circulation of different water masses.
Most of you have spent time at the beach or taken a boat trip from the coast. The realm that you have seen is the surface layer of the ocean. This is the warm, generally illuminated layer that exchanges heat and gases with the atmosphere. The surface layer usually extends down a few hundred meters and accounts for only a small fraction of the total ocean volume. As such, it is very much like the skin of a piece of fruit. Yet this layer is home to much of the marine food chain. At the base of the surface layer is a sharp temperature drop known as the thermocline. Below the thermocline, the deep ocean is, by contrast, generally cold, dark, and inhospitable. However, the deep ocean plays a vital role in heat transport, and thus we will spend considerable time considering deep-water masses.
In terms of climate, the key property of seawater is its temperature. Temperature levels also help define the surface and deep ocean masses. Temperature of the surface ocean water varies seasonally, with warmer temperatures recorded in summer and colder temperatures in winter. When we study ocean temperatures, we usually consider the annual average. On this basis, surface ocean temperatures range from about 28°C in the tropical western Pacific to –3°C off the coast of Antarctica. Because seawater is salty, it does not freeze even though its temperature is below 0°C.
The following video shows real sea surface temperature data as well as the distribution of ice from 1985 to 1997. Note the way temperature bands and ice sheets shift from south to north with the seasons.
Temperature has a major influence on the density of seawater and thus plays a large role in deep ocean circulation, as we will see. Warm water expands and thus is less dense (less mass per unit volume) than cold water.
A key property of seawater that also has a significant effect on the behavior of water masses is salinity. This variable is a measure of the amount of dissolved salts such as sodium, potassium, and chloride in the water, and, like temperature, it has a major effect on seawater density. The higher the salinity (the more salt), the denser the seawater. The major factor that controls salinity at a location is the balance of evaporation and precipitation. Where evaporation is higher than precipitation, salinity levels are higher; where precipitation exceeds evaporation, salinity levels are lower. The units of salinity are parts per thousand (ppt or ‰), which refers to the weight of dissolved salt (in grams) in 1000 grams (or a kilogram) of water. The most saline surface waters are found in basins in highly arid regions with high rates of evaporation, such as the Persian Gulf, the Red Sea, and the eastern Mediterranean. The lowest salinities are found in locations with high levels of precipitation and runoff from land, typically near where large rivers enter the ocean. Highly saline water has more salt dissolved between the water molecules, and thus is denser than low-salinity water.
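The combined effects of temperature and salinity on density can be sketched with a linearized equation of state. The coefficients below are rough, illustrative values, not those of a full seawater equation of state such as TEOS-10:

```python
# Linearized sketch of how temperature and salinity set seawater density
# (coefficients are rough illustrative values, not a real equation of state):
#   rho ~ rho0 * (1 - alpha*(T - T0) + beta*(S - S0))
RHO0, T0, S0 = 1027.0, 10.0, 35.0   # kg/m^3, deg C, ppt (reference state)
ALPHA = 2.0e-4    # thermal expansion: warmer water is less dense
BETA = 8.0e-4     # haline contraction: saltier water is denser

def density(temp_c, salinity_ppt):
    return RHO0 * (1 - ALPHA * (temp_c - T0) + BETA * (salinity_ppt - S0))

# Cold, salty water (as in deep-water formation regions) is densest:
print(density(2.0, 35.5) > density(25.0, 34.5))  # -> True
```

This is the essential logic behind the deep circulation discussed below: cooling and salinification both raise density, and the densest surface waters sink.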
The circulation of the surface ocean is driven primarily by surface winds. As we have seen, winds blow from areas of high atmospheric pressure to regions of low atmospheric pressure. These winds generally transfer heat from areas where there is excess incoming radiation (the tropics and subtropics) to temperate and higher-latitude regions, where there is a net loss of heat. Generally speaking, the distribution of pressure on the Earth's surface is zonal (banded by latitude), with high-pressure bands covering the subtropics and polar regions and low-pressure bands covering the equatorial and subpolar regions.
Where winds and surface currents move along a coastline, they draw the surface water away from the coast. The surface waters are replaced by waters rising from depth, a process known as upwelling. This is shown in the figure below.
Upwelling also happens in parts of the ocean where winds cause surface currents to diverge or move away from one another. Downwelling is the opposite process to upwelling, where surface waters flow downwards and replace deep waters. This occurs in parts of the ocean where surface winds are converging. One place this happens is in the centers of gyres.
Winds generally blow out from the subtropics towards the equator and the subpolar regions, and from the polar regions to the subpolar latitudes. Complicating matters, the rotation of the Earth causes the winds to curve as they move (the Coriolis effect). The rotation of the Earth causes an object to deflect towards the right (as viewed by a stationary observer) in the Northern Hemisphere, which results in a clockwise motion, and towards the left in the Southern Hemisphere, which results in a counterclockwise motion. These deflections, combined with the zonal pressure distribution, result in enormous, nearly ocean-scale cells, or gyres, of surface winds.
As surface winds push the surface layer of the ocean with them, the surface wind gyres result in surface ocean current gyres. Along coastlines, the direction of movement of a gyre has a significant impact on continental climate. For example, a current moving from south to north in the Northern Hemisphere, or north to south in the Southern Hemisphere, will generally deliver warmer water to the coastal region, whereas a current moving from the north to south in the Northern Hemisphere or south to north in the Southern Hemisphere will generally deliver colder water. The flow of warm water will generally cause a larger moderating influence on coastal climate than will the flow of cold water. Take, for example, the Gulf Stream in the North Atlantic. This warm current has a major heating effect on the shores of Great Britain and other parts of Northern Europe, keeping these regions relatively balmy compared to locations at comparable latitudes. After it bathes the shores of Britain, the North Atlantic gyre bends towards the south, thus bringing relatively cold waters to the shores of Spain, Portugal, and Morocco further to the south, keeping these areas cooler than areas not influenced by the currents.
This NASA video provides an excellent summary of surface ocean currents:
The deep ocean is generally considered to include the ocean below a transition known as the thermocline. The thermocline is the sharp temperature decrease that lies at the base of the surface mixed layer, where waters are generally uniform in temperature as a result of convection. Deep-water masses are produced at the surface of the ocean and transported to depth via downwelling. Generally, downwelling occurs where the surface ocean is cool, or, rarely, unusually saline. Downwelling water travels along lines of equal density known as isopycnals and spreads out horizontally at the level where it is equal in density to the surrounding water mass.
The production of deep-water masses via downwelling occurs in high-latitude regions of the northern and southern hemispheres, where the surface ocean is cooled by winds. Wind moving over the water both cools it and increases evaporation. Evaporation removes only water molecules, so the salinity of the remaining water increases. Falling temperature and increasing salinity render these surface water masses denser, allowing them to downwell. In certain locations, the formation of sea ice also causes an increase in salinity, as freezing removes fresh water and leaves the salt behind, in a process known as brine exclusion. Pockets of salty water around the margins of the ice sink as a result of their higher density. Brine exclusion thus reinforces the densification caused by wind cooling.
Today there are three major deep ocean masses. North Atlantic Deep Water or NADW is mainly produced where the surface ocean is cooled in the Norwegian Sea, in the northern part of the North Atlantic on the north side of a ridge that runs between Greenland, Iceland, and Scotland. This cooled water spills over the ridge and downwells. Portions of NADW are also produced in the Labrador Sea and in the Mediterranean. This water mass is 1-2.5°C and 35 ppt. NADW travels down the west side of the North Atlantic Ocean at a depth of 2000-4000 m and through the west side of the South Atlantic. Much of NADW upwells in the Southern Ocean, but portions join the Antarctic Circumpolar Current and travel at depth into the Indian and Pacific Oceans.
Antarctic Bottom Water or AABW is produced by evaporative cooling off the coast of Antarctica and under the Ross ice shelf. With this source, AABW is amongst the coldest water in the ocean, with a temperature of –0.4°C. This water is relatively fresh (average 34.6 ppt). AABW travels northward along the western side of the South Atlantic underneath NADW. Some of the water mass spills over into the eastern part of the South Atlantic, while the remainder travels into the equatorial channel between South America and Africa.
The third major source of deep water is called Antarctic Intermediate Water or AIW. AIW is produced near the Antarctic Convergence or Polar Front, where downwelling occurs as a result of the convergence of surface currents. AIW has a temperature of 3-7°C and a salinity of 34.3 ppt. It travels a considerable distance northward into the Atlantic, Indian, and Pacific Ocean basins.
There are numerous other deep water masses, especially at intermediate depths, for example, North Pacific Intermediate water. As deep-water masses travel through the ocean, they gradually mix with surrounding water masses. For example, NADW mixes with AABW and AIW.
Downwelling supplies oxygen to the deep ocean and therefore ventilates this body of water. It does not bring nutrients. Deep water currents generally move very slowly, with a velocity of several cm per second. Typically, surface currents move 10-100 times faster than this. At these rates, deep water currents take thousands of years to encircle the globe. In fact, the oldest deep water in the ocean (in the North Pacific) is about 1500 years old. As deep waters circle the globe, their properties change. They mix with waters around them, and their chemistry changes as they acquire nutrients such as phosphate and CO2 from decaying organic matter and lose oxygen.
The opposite process to downwelling is upwelling, in which water rises from depth toward the surface. This generally results when surface winds move surface water masses away from a location, and water from below rises to fill the void. Upwelling is frequent in coastal regions, especially in the subtropics, where high pressure results in a dominant offshore wind flow. Upwelling is also common at ocean divergences, where winds drive surface currents apart via Ekman transport. Upwelling is crucial to the supply of nutrients to surface water masses, fueling high levels of productivity in the surface ocean. The world's most prolific fisheries occur in nutrient-rich coastal waters supplied by upwelling, such as those off Peru and California.
As we have seen, the circulation of the deep ocean is driven by density differences that arise from the temperature and salinity of the different water masses. This type of circulation is known as thermohaline (thermo = temperature; haline = salt, or salinity). Strictly speaking, since surface ocean currents are driven not by thermohaline mechanisms but by winds and, to a much lesser degree, tides, the circulation of the ocean as a whole is often called the meridional overturning circulation. However, we will continue to use the term thermohaline when addressing deep-water circulation.
Probably the most important property of seawater in terms of its effect on life in the oceans is the concentration of dissolved nutrients. The most critical of these nutrients are nitrogen and phosphorus because they play a major role in stimulating primary production by plankton in the oceans. These elements are known as limiting nutrients because plants cannot grow without them. However, there are a number of other nutrients that also play a role, including silicon, iron, and zinc. Nutrients in the ocean are cycled by a process known as biological pumping, whereby plankton extract the nutrients from the surface water and incorporate them into their organic matrix. Then, when the plants die, sink, and decay, the nutrients are returned to their dissolved state at deeper levels of the ocean. The abundance of nutrients determines how fertile the oceans are. A measure of this fertility is the primary production, which is the rate of fixation of carbon per unit volume of water per unit time. Primary production is often mapped by satellites using the distribution of chlorophyll, a pigment produced by plants that absorbs energy during photosynthesis. The distribution of chlorophyll is shown in the figure above. You can see the highest abundance close to the coastlines, where nutrients from the land are fed in by rivers. The other locations where chlorophyll levels are high are upwelling zones, where nutrients are brought to the surface ocean from depth.
Another critical element for the health of the oceans is the dissolved oxygen content. Oxygen in the surface ocean is continuously added across the air-sea interface as well as by photosynthesis; it is used up in respiration by marine organisms and during the decay or oxidation of organic material that rains down in the ocean and is deposited on the ocean bottom. Most organisms require oxygen, thus its depletion has adverse effects for marine populations. Temperature also affects oxygen levels, as warm waters can hold less dissolved oxygen than cold waters. This relationship will have major implications for future oceans, as we will see.
The final seawater property we will consider is the content of dissolved CO2. CO2 behaves nearly opposite to oxygen in many chemical and biological processes; it is used up by plankton during photosynthesis and replenished during respiration as well as during the oxidation of organic matter. As we will see later, the CO2 content is important for the study of deep-water aging.
As we have seen, surface ocean currents are the dominant sources of deep water masses. In fact, it is a little more complicated than this, as deep water masses also feed one another. In a generalized sense, however, the surface and deep ocean currents can be viewed as an integrated system known as the Global Conveyor Belt, a concept conceived by the brilliant geoscientist Wally Broecker of Columbia University. Diagrams of the Global Conveyor Belt (GCB) are two-dimensional and therefore simplified; they do not, for example, include all of the intermediate water masses or surface currents. The key to the Global Conveyor Belt concept, however, is that it explains the general systems of heat transport as well as bottom-water aging and nutrient supply in the oceans.
The following animation traces the path of water through the surface and deep ocean, showing the dominant features of the GCB, including the formation of North Atlantic Deep Water (NADW) in the North Atlantic.
The GCB shows the dominant source of deep water in the oceans to be North Atlantic Deep Water, which splits in two to flow into the Indian and Pacific Oceans. In these locations, upwelling of the deep water mass produces surface currents that generally flow back towards the original source of deep water in the North Atlantic. As for heat supply, the conveyor belt involves the transport of heat and moisture to northwest Europe by the Gulf Stream; this accounts for about 30% of the heat budget of the Arctic region, making the GCB extremely important for Arctic climate.
Because deep-water masses circulate very slowly, a complete circuit of the GCB takes about 1500 years, meaning that the oldest water in the oceans is about this age. Because oxygen is gradually depleted in deep waters as they age, while CO2 and nutrient contents conversely increase, the oldest water masses of the ocean, found in the North Pacific, are among the most nutrient-rich, CO2-rich, and oxygen-depleted waters in the ocean. Conversely, newly produced NADW is among the most nutrient-depleted, CO2-depleted, and well-oxygenated water in the world.
As it turns out, recent research on the detailed configuration of surface and deep currents shows that circulation is much more complex than the GCB suggests. Floats deployed in the ocean do not always follow the pathways expected in the GCB model. Wind plays a more significant role in causing downwelling than previously thought. Moreover, mixing by small systems, or eddies, plays a large role in driving surface currents.
El Niño, the most powerful control on weather across the planet, is rearing its ugly head as we speak. The telltale signs are building, and atmospheric scientists are predicting one of the most powerful El Niño events in history, certainly the strongest since 1998. This could actually be very good news for drought-stricken California, as El Niño typically delivers a lot of rain and snow to the state. The drought in California has been so severe that when heavy rains finally arrived last winter, they triggered massive mudslides; the rains also spurred a flush of new vegetation that served as fuel for the fires of summer 2017. Southern Australia becomes very dry in El Niño years, and the drought causes great hardship for farmers and can often lead to very dangerous bushfires. In Peru, El Niño brings changing ocean currents that lead to the collapse of fisheries and great financial hardship. In this section, we learn about ocean circulation and how it impacts climate on a global scale.
One of the clearest examples of the influence of the oceans on climate is the El Niño Southern Oscillation (ENSO). This is the strongest inter-annual (i.e., year-to-year) variation in the modern climate system. The ENSO cycle lasts between two and seven years (an average of four years) and corresponds to changes in sea surface and thermocline temperatures and atmospheric pressure in the Pacific Ocean that ultimately affect weather and surface ocean conditions across the globe. The ENSO cycle consists of two main climatic intervals, the El Niño interval and the La Niña interval. The El Niño stage begins with an increase in sea surface temperatures across the eastern Pacific as warm waters spread from the west.
Typically, there is a very large east-to-west SST gradient across the Pacific, but El Niño virtually erases this gradient. For reasons that are not fully understood, the El Niño stage is characterized by high atmospheric pressure in the western Pacific. The opposing La Niña stage is characterized by relatively cool waters in the eastern tropical Pacific and low atmospheric pressure in the western Pacific. The westward trade wind flow piles up warm water in the western Pacific, causing sea level to be about 20 cm higher in the western than in the eastern Pacific (so sea level is, relatively speaking, higher in the eastern Pacific during El Niño). To compensate for this wind-driven flow, the thermocline slopes upward towards the eastern Pacific; that is, it shallows to the east, so that cold water lies close to the surface there. The slope of the thermocline changes markedly during the ENSO cycle. As El Niño piles up warm water in the eastern Pacific, the thermocline deepens in that location, whereas the La Niña part of the cycle corresponds to a very shallow thermocline in the eastern Pacific.
The following animation shows how an El Niño event in 1997 developed in the tropical Pacific Ocean:
Visualizing how three key datasets differ from normal conditions reveals the magnitude of the 1997-98 event and gives new insights into how the ocean and atmosphere couple to produce El Niño. First, we look at sea surface height. The gray sheet rises and dips as much as 30 centimeters from normal. Next, we map sea surface temperature. Red is 5 degrees Celsius above normal and blue 5 degrees below normal. Finally, black arrows are added to mark sea surface winds.
A weakening of the trade winds in the far western Pacific fuels an eastward-moving wave. Sea level is raised as the wind-driven wave progresses. The wave's arrival at the South American coast reduces the normal upwelling of cold water and warms the surface by as much as five degrees Celsius. Intense atmospheric convection associated with equatorial ocean warming causes local winds to converge, abating the trade winds' strength and extending the warm pool of water into the eastern Pacific. Beneath the ocean surface, warm and cold waters are separated by the thermocline, a boundary often approximated by the 20-degree-Celsius isotherm. El Niño flattens the thermocline, squeezing the warm bulge of water eastward into a long, shallow pool. As scientists collect more detailed data through efforts such as NASA's Earth Observing System, visualization will be crucial in probing the elaborate interactions and far-flung effects of future El Niños.
As you may recall from Module 2, there is a good correlation between El Niño events and warmer temperatures on a global scale; in fact, a good part of the temperature variation superimposed on the general warming trend of the last 60 years can be explained by ENSO variability. This can be seen in the figure below (from Module 2).
During an El Niño event (red parts of the ENSO curve), more water vapor is added to the atmosphere, making it warmer, and during a La Niña event (blue parts of the ENSO curve), less water vapor makes it into the atmosphere, so the global temperature drops. Remember that water vapor is a greenhouse gas, and it also transports heat to the atmosphere when the vapor condenses to form water droplets (clouds and rain).
ENSO has major implications for weather across the globe. For example, drought conditions in the Midwest in 2012 were a direct result of the ENSO cycle. El Niño intervals correspond to increased precipitation in California and eastern South America, due largely to the fact that warm surface waters evaporate more readily and the warmer air above them can hold more moisture. The warm surface water in the eastern Pacific also increases surface ocean stratification and decreases upwelling offshore of Peru and Ecuador. The implications of diminished upwelling are severe for coastal fisheries in these countries. Warm water also increases the occurrence of bleaching in Caribbean and eastern Pacific corals. The position of the high-pressure region in the Pacific affects the tracks of Pacific cyclones. In the Atlantic, increased wind shear during El Niño intervals inhibits the formation of hurricanes. El Niño intervals also correspond to cooler and wetter winters in Florida and the southeastern US, and warmer and drier winters across the northern tier of the US and in Canada. The La Niña phase corresponds to the opposite conditions, most notably an increase in the frequency of Atlantic hurricanes as conditions become more favorable for their formation (i.e., less shear in the tropical Atlantic).
Since records have been kept, the average temperature of the global surface ocean has warmed by more than 1°C. Models suggest that all areas of the surface ocean will warm significantly more than this by 2100, especially in the high latitudes. This warming will have major implications for a number of key processes in the oceans, some of which will impact climate and weather patterns on land. In this section, we discuss the impact of climate change on the intensity and frequency of hurricanes, the potential shutdown of the global conveyor belt, the intensity of the ENSO cycle, and the occurrence of oceanic hypoxia.
Many of you learned about the devastating force of hurricanes when Katrina hit New Orleans in August 2005. In fact, there have been a number of devastating storms over the last few decades, including Hurricane Hugo (Charleston) in 1989, Hurricane Andrew (South Florida) in 1992, and Cyclone Yasi in the Pacific (Australia) in 2011. Atlantic hurricanes and Pacific cyclones get their energy from the warm tropical ocean, so wouldn't it follow that a warmer ocean will fuel more deadly storms? Well, it's not quite that simple, and the question is more than a little controversial.
Early each spring, Bill Gray of Colorado State University puts out an annual forecast for the late summer and fall Atlantic hurricane season. Gray's forecast is based on a number of factors, including sea surface temperatures in the tropical Atlantic, several atmospheric parameters, and the stage of the ENSO cycle. While hurricane experts agree about the feasibility of seasonal forecasting, there is much less consensus among them about the effect of climate change on storms. Part of the problem is that coupled climate-ocean models do not have the resolution to forecast individual storms with a great deal of accuracy.
Since 1980, there has been a marked increase in tropical Atlantic SSTs from August to October, the interval when hurricanes form, and a parallel increase in the power dissipation of hurricanes, a measure computed by cubing each storm's maximum wind speed, summing over the storm's duration, and then summing over all storms. In fact, the agreement between these two records is exceptional. Yet over the same period, there has been no observed increase in the frequency of hurricanes. Indeed, when climate modelers downscale models of future climate to obtain predictions for smaller regions than global models typically resolve, the result is a prediction of increased hurricane intensity but actually fewer storms in the future.
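To make the power dissipation measure concrete, here is a minimal sketch in Python. The storm wind records and the 6-hour observation interval are hypothetical illustrations, not real hurricane data; the point is simply that the index cubes the wind speed, so intensity dominates the total.

```python
# Sketch of a power dissipation index (PDI): cube the maximum sustained
# wind speed at each observation, sum over the storm's duration, then sum
# over all storms in a season. All storm data below are made up.

OBS_INTERVAL_S = 6 * 3600  # best-track winds are typically reported every 6 hours

def storm_pdi(max_winds_m_s):
    """Sum of v_max^3 over a storm's lifetime (units: m^3/s^2)."""
    return sum(v ** 3 for v in max_winds_m_s) * OBS_INTERVAL_S

def seasonal_pdi(storms):
    """Total PDI for a season: the individual storm PDIs summed."""
    return sum(storm_pdi(track) for track in storms)

# Two hypothetical storms: a long-lived major hurricane and a short, weak one.
season = [
    [33, 42, 55, 60, 58, 45, 33],  # winds in m/s at successive 6-hour steps
    [18, 21, 24, 20],
]
print(f"Seasonal PDI: {seasonal_pdi(season):.3g} m^3/s^2")
```

Because of the cubing, a single intense storm can contribute more to the seasonal total than several weak storms combined, which is why power dissipation can rise sharply even when storm counts do not.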
As we saw earlier, the North Atlantic Ocean is the central driver of the global conveyor. Changes in the density of surface waters in the Norwegian Sea, as a result of warming, increased precipitation, and melting of Greenland ice, can impact the downwelling flow that forms NADW. Sufficient changes in density could potentially shut down the production of NADW and halt the global conveyor. This, in turn, would have major implications for global climate.
You might find it hard to believe that such a massive engine of water and heat transport could possibly shut down. However, it has happened numerous times in the past, and this is perhaps the strongest evidence that it could happen in the future. Moreover, the past shutdowns have occurred over very short time intervals. Here, we will summarize our current understanding of the likelihood of GCB shutdown.
Approximately 8,200 years ago, in the midst of the gradually warming climate of the current interglacial, a very abrupt cool period occurred that was likely the result of rapid melting of the Northern Hemisphere ice sheet and the flow of massive amounts of freshwater into the North Atlantic Ocean. The event lasted about 10-15 years (a geologic heartbeat) and was likely due to a massive breach of meltwater over a barrier in the St. Lawrence Seaway. The flood of freshwater across the surface of the North Atlantic decreased the density of the water to such an extent that downwelling virtually stalled. The switching off of thermohaline circulation drastically slowed oceanic heat transport, which in turn led to rapid cooling in the Northern Hemisphere, especially in Northern Europe.
Moving to the present day, the flow of NADW is likely weakening due to increasing SSTs and freshening of the North Atlantic. Freshening is the result of melting Greenland ice as well as increased moisture transport in a warmer atmosphere. However, detecting a slowdown is very difficult because the meridional overturning circulation is highly variable, with a yearly average of 18.7 Sverdrup (a Sverdrup is one million cubic meters per second) but a range of 4.4 to 35.3 Sverdrup. Weakening is likely to continue into the future, partly as a result of increased precipitation in the northern high latitudes. However, most coupled ocean-atmosphere models suggest that under realistic emission scenarios, thermohaline circulation will continue to slow but not shut down. The consensus of model runs suggests reductions of 10-50% in the intensity of the meridional overturning circulation by the year 2100. These predictions are uncertain, however, because the relationships between many of the variables are not linear; rather, there appear to be thresholds or tipping points that make prediction difficult.
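A quick back-of-the-envelope calculation, using the figures quoted above, shows why detecting a slowdown is so hard: even a 50% reduction in the mean flow would still lie within the observed year-to-year range.

```python
# Put the overturning numbers quoted above into common units and compare
# the projected 2100 reductions against the observed variability.

SVERDRUP_M3_PER_S = 1_000_000   # 1 Sverdrup = one million cubic meters per second

mean_amoc_sv = 18.7             # yearly average overturning (Sv), as quoted
obs_range_sv = (4.4, 35.3)      # observed range of variability (Sv), as quoted

# Projected 2100 mean flow under the quoted 10-50% reduction range
strong_cut = mean_amoc_sv * (1 - 0.50)   # 50% weaker
mild_cut = mean_amoc_sv * (1 - 0.10)     # 10% weaker

print(f"Mean flow: {mean_amoc_sv * SVERDRUP_M3_PER_S:.3g} m^3/s")
print(f"Projected 2100 mean: {strong_cut:.2f} to {mild_cut:.2f} Sv")
print(f"Within observed range {obs_range_sv}? "
      f"{obs_range_sv[0] < strong_cut and mild_cut < obs_range_sv[1]}")
```

Since the projected weakened means (roughly 9.4 to 16.8 Sv) fall inside the observed 4.4-35.3 Sv spread, a long observational record is needed before a trend can be separated from natural variability.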
What does this weakening of the NADW mean for the climate? Most ocean models show that as it weakens, less of the Gulf Stream warmth travels to high latitudes. Today, this warmth accounts for something like 30% of all the heat supplied to the Arctic region, so as the NADW weakens, models predict less heat moving into the Arctic. This might lessen the expected dramatic warming in the Arctic, but if it slows too much, there is a chance that the Arctic could become much colder — and very rapidly — as it did during the Younger Dryas event about 11 thousand years ago.
The connection between climate change and changes in the duration and intensity of ENSO events has been hotly debated, as has the relationship between climate change and the extreme weather events often connected to ENSO. Are extreme events getting more extreme as a result of climate change, or is this just part of normal ENSO variability? Examples include major blizzards in the northeastern US during La Niña years and intense "Pineapple Express" rainfall in California during the El Niño stage of the ENSO cycle. As easy as it is to attribute such extreme events to climate change, reality is not nearly that simple: these events fall under weather, not climate, and much longer-term records are required before an empirical connection can be made with certainty.
Models do not agree on the effects of warming on the ENSO cycle and so we will have to wait to see what happens in the future.
Another great threat to marine environments is hypoxia, a condition in which seawater or freshwater becomes deficient in oxygen. As with harmful algal blooms, the main culprit for hypoxia is nutrient loading from agricultural runoff or pollution. This oxygen deficiency can cause massive fish kills that devastate coastal economies. Since warm waters can hold less oxygen than cold waters, climate change will render coastal regions more prone to hypoxia in the future. Hypoxia tends to occur when waters become increasingly stratified, which is gradually happening in many coastal areas.
The northern Gulf of Mexico is a region where a seasonal dead zone is now well established. This dead zone was first noted in the 1970s and has gradually grown in size and intensity since. The critical driver for hypoxia there is the outflow of nutrients from the Mississippi River. The Mississippi drains an enormous agricultural region of the mid-continent and thus acquires a very high level of nutrients. These nutrients spur production by algae in the surface ocean; the resulting organic matter rains down and consumes oxygen as it decays in the deeper water column. The dead zone appears in spring as nutrient loads increase and peaks in late summer as oxygen becomes steadily depleted, exacerbated by the fact that warmer waters hold less oxygen than colder waters. Other regions with prominent dead zones include the Chesapeake Bay and Long Island Sound in the US and the Baltic Sea in Northern Europe. These areas will be more prone to hypoxia in the future.
The following video gives an overview of how hypoxia develops in the Gulf of Mexico.
In this module, you should have grasped the following concepts:
You should have read the contents of this module carefully, completed and submitted any labs, the Yellowdig Entry and Reply, and taken the Module Quiz. If you have not done so already, please do so before moving on to the next module. Incomplete assignments will negatively impact your final grade.
There is one definite advantage to being over the age of forty. If you were lucky enough as a kid, you had the opportunity to snorkel or dive in a place like the Florida Keys or the Bahamas when coral reefs were healthy and diverse, colorful, and teeming with life. There are very few places on Earth where this is the case today. Tragically, the large majority of reefs are in a profound state of decline, with limited growth of new coral and overgrowth by slimy green or brown films of algae. Piles of coral debris are commonplace. The water is often muddy, and the plethora of life has departed. Reefs are no longer magical places. There is absolutely no argument about it: the decline in coral reefs is human-inflicted. To this point, the largest culprit is probably pollution, but that will change in coming decades. Enter ocean acidification, a process that has already begun, tied to our excessive CO2 emissions, and one that will accelerate soon. Corals cannot grow once the pH drops below a certain level, and if we don't act fast, that level will be reached by mid-century. Reefs have survived the two largest mass extinctions the Earth has faced, but they may not survive the mass extinction humans are causing.
Ecosystems such as reefs are governed by a delicate balance of interactions between animals and plants: yin and yang. If that balance is upset even slightly and one part of the ecosystem is favored at the expense of others, havoc can break out. The recent surge in harmful algal blooms along many coasts and the outbreaks of massive jellyfish in the western Pacific Ocean are signs that the balance has tipped. Our oceans are getting out of whack.
In the latter part of Module 6, we learned about changes in circulation and health of the oceans that are predicted to occur with climate change. In this module and Modules 8-11, we continue to address the impacts of climate change on natural systems. These changes are very much central to the issue of sustainability of the planet and its populations. "Sustainability" has a variety of definitions, and, in particular, the meaning for environmentalists is substantially different from the meaning for businesses. Regardless of where you might be coming from, sustainability means the preservation of society and our way of life. Most directly, we are concerned with maintaining the needs of people today and in the future, and this very much hinges, as we will see in this module, on sustaining the life support systems of the planet.
Global warming and an array of environmental changes resulting from human activities are already causing profound impacts on organisms across the spectrum of the marine food chain. Warming of the ocean and subtle changes in its chemistry are combining with pollution and overfishing to alter the habitat of many marine creatures. In the near future, these habitats look to be further impacted, and potentially destroyed, with possibly devastating biological and economic consequences, including very negative impacts on people. The goal of this module is to learn about three very different but equally significant impacts of climate change and human activity on life in the ocean: ocean acidification, red tides, and blooms of jellyfish.
To set the stage, watch the award-winning video Sea Change, produced by Craig Welch.
On completing this module, students are expected to be able to:
Below is an overview of your assignments for this module. The list is intended to prepare you for the module and help you to plan your time.
Assignment | Location
---|---
To Do |
Increasing levels of CO2 in the atmosphere are slowly causing the surface of the ocean to become more acidic. This is because the ocean absorbs some of the CO2, forming a weak carbonic acid. At present, the ocean absorbs about a third of fossil fuel emissions, but this fraction is likely to increase to as much as 90% in the future. Over the last century, the average pH of the ocean has decreased, and there are hints that current levels are beginning to impact organisms that make their shells out of the minerals aragonite and calcite (both composed of CaCO3). Aragonite is more susceptible to dissolution in acidified waters than calcite. Coral reefs are made of the mineral aragonite and are particularly vulnerable to ocean acidification. A recent study found, for example, that the area of coral covering the Great Barrier Reef in Australia has been cut in half since 1985. Even coccolithophores and foraminifera, calcite-shelled organisms that serve a vital role at the base of the marine food chain, are becoming increasingly susceptible. Moreover, the future appears even bleaker; some CO2 projections suggest that by the year 2100 there will be a 150% increase in the ocean's acidity compared to preindustrial times. Here we review the chemical changes in seawater that result from increasing CO2, then discuss the impact on reefs and planktonic organisms in the ocean. Finally, we discuss the evidence for acidification in ancient oceans and its impact on life in the past.
The following video provides a thorough overview of the potential impact of acidification on the oceans.
The ocean contains a massive reservoir of dissolved CO2, roughly fifty times more than the atmosphere; by contrast, the amount derived from fossil fuel burning is relatively modest. Since the beginning of the industrial revolution, about 340 to 420 petagrams of carbon (a petagram, or Pg, is 10^15 grams) in the form of CO2 has been emitted to the atmosphere, with about a third of that amount, approximately 118 Pg, absorbed by the ocean. Seawater today may already contain more CO2 than at any time in many millions of years.
As we discussed in Module 5 on the Carbon Cycle, the absorption of CO2 in the ocean forms weak carbonic acid (H2CO3). Some of this acid dissociates in seawater releasing H+ ions, which make the water more acidic, as well as HCO3- (bicarbonate ions) and CO32- (carbonate ions). This reaction is as follows:
CO2(aq) + H2O = H2CO3 = H+ + HCO3- = 2 H+ + CO32-
Going back to your elementary chemistry course, you might remember that a pH greater than 7 is regarded as alkaline, whereas a pH less than 7 is acidic. Surface ocean waters have a pH between about 7.9 and 8.3, which means that they are, by definition, alkaline. Anthropogenic CO2 is thought to have decreased the mean pH of the ocean by 0.1 unit since 1800. This may not sound like much, but more ominous is the projection that if CO2 levels continue to rise unabated (i.e., projections based on the SRES A2 "business as usual" scenario), pH levels will drop a further 0.3 units by 2100. As we will see below, in parts of the ocean these levels would be extremely damaging to organisms that build their skeletons out of CaCO3, which is very sensitive to CO2 addition.
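Because pH is a logarithmic scale, small pH drops correspond to large increases in hydrogen-ion concentration. A short calculation, assuming an illustrative preindustrial surface pH of 8.2, shows how the 0.1-unit drop so far and the projected further 0.3-unit drop translate into percent increases in acidity:

```python
# pH is defined as -log10 of the hydrogen-ion concentration, so a drop
# of x pH units multiplies [H+] by 10^x. The starting pH of 8.2 is an
# illustrative assumption; the percentage changes depend only on the drops.

def hplus(pH):
    """Hydrogen-ion concentration (mol/L) from pH: [H+] = 10^-pH."""
    return 10 ** (-pH)

preindustrial = 8.2              # assumed preindustrial surface-ocean pH
today = preindustrial - 0.1      # ~0.1-unit drop since 1800
projected_2100 = today - 0.3     # further 0.3-unit drop under high emissions

increase_today = hplus(today) / hplus(preindustrial) - 1
increase_2100 = hplus(projected_2100) / hplus(preindustrial) - 1
print(f"H+ increase so far:  {increase_today:.0%}")   # about 26%
print(f"H+ increase by 2100: {increase_2100:.0%}")    # about 151%
```

The roughly 150% increase for 2100 matches the figure quoted earlier in this module. Note that the result depends only on the size of the pH drops, not on the assumed starting pH.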
CaCO3 is the dominant material used by invertebrate organisms to build their skeletons. There are two different minerals made of CaCO3, known as polymorphs: calcite and aragonite. These minerals have the same composition but different crystal lattice structures, and thus their properties and behavior in seawater differ, including their tendency to dissolve. To understand how CaCO3 dissolves and precipitates, we need to introduce a term, Ω, that represents the saturation state of the water with respect to CaCO3. Where waters are highly saturated and Ω is high, calcite and aragonite are less likely to dissolve than where waters are less saturated, or even undersaturated, and Ω is low. Likewise, calcite and aragonite are more likely to precipitate at higher Ω values. The dissolution and precipitation reactions are as follows:
Dissolution reaction: CaCO3 (solid) = Ca2+ + CO32-
Precipitation reaction: Ca2+ + CO32- = CaCO3 (solid)
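The role of Ω in these reactions can be sketched numerically as Ω = [Ca2+][CO32-]/Ksp'. The concentrations and stoichiometric solubility products below are illustrative round numbers for warm surface seawater, not measured values:

```python
# Illustrative saturation-state calculation. All constants here are
# assumed round numbers for the sketch, not laboratory values.

CA = 0.0103             # mol/kg calcium, roughly constant in seawater
KSP_CALCITE = 4.3e-7    # mol^2/kg^2, assumed stoichiometric solubility product
KSP_ARAGONITE = 6.6e-7  # aragonite is more soluble, so its Ksp' is larger

def omega(co3, ksp):
    """Saturation state: >1 favors precipitation, <1 favors dissolution."""
    return CA * co3 / ksp

co3_today = 200e-6      # mol/kg carbonate ion, illustrative tropical surface value
co3_future = 140e-6     # reduced carbonate ion after further CO2 uptake (assumed)

for label, co3 in [("today", co3_today), ("future", co3_future)]:
    print(label,
          f"omega_calcite={omega(co3, KSP_CALCITE):.1f}",
          f"omega_aragonite={omega(co3, KSP_ARAGONITE):.1f}")
```

Because aragonite's solubility product is larger, its Ω is always lower than calcite's in the same water, so aragonite skeletons, including those of corals, approach undersaturation first as carbonate-ion concentrations fall.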
An increase in CO2 from the atmosphere presents a double whammy for skeletons formed from CaCO3, both aragonite and calcite. The H+ ions that derive from the dissociation of carbonic acid combine with carbonate ions (CO32-) to form bicarbonate ions (HCO3-). This rapid reduction in available carbonate ions decreases Ω and limits calcification by organisms with aragonite- and calcite-based skeletons. Here, however, we need to dispel two myths. The first myth is that the precipitation of CaCO3 is directly controlled by pH. In fact, precipitation is affected principally by the decrease in CO32-, which coincides with the addition of H+ ions and the reduction in pH. The second myth is that precipitation of CaCO3 can occur in any water that is oversaturated with respect to the particular CaCO3 mineral. In fact, both corals and coccolithophores have been shown to have difficulty calcifying even in waters that were oversaturated. Different organisms can calcify at very different Ω values, but for most, the decrease in saturation that results from decreasing CO32- content is a direct threat to calcification. Finally, it is key to note that aragonite is more susceptible to dissolution than calcite. Thus, shells made of the CaCO3 polymorph aragonite, including those of corals, will be the first to dissolve, followed by those made of the polymorph calcite.
The following video explains the threat of ocean acidification to the calcareous plankton.
The saturation state of CaCO3 in the oceans is also a function of temperature and pressure. A delicate balance exists between the production of CaCO3 via the formation of skeletons in the shallow ocean and the dissolution of this aragonite and calcite in the colder, deeper realms of the ocean where waters are less saturated. In most parts of the ocean, undersaturation occurs far below the surface. However, the recent increase in dissolved CO2 is leading to a shoaling of the CaCO3 saturation horizon; in the future, this will especially impact organisms that live at depth or in colder waters, as well as those that make their shells of the mineral aragonite, which is more soluble in seawater than calcite.
Coral reefs have existed for hundreds of millions of years and have provided a habitat for some of the richest diversity on the Earth's surface. They are the marine version of tropical rainforests. Reefs harbor a slice of the marine food chain, all the way from tiny autotrophic protistans (autotrophs fix carbon through photosynthesis) to large, predatory fish. Hundreds of millions of humans live near reefs and receive important resources from them. Reefs host productive fisheries; they also protect low-lying coastal areas from storms and are vital for a number of key habitats, including mangrove forests.
The organisms that construct reefs, largely corals, have evolved over time, and with that change the locations of reefs and the dynamics of the reef community have changed as well. Over their long history, reefs have had several intervals of crisis; in particular, they almost ceased to exist at the Permian-Triassic boundary, when over 90% of marine species became extinct, and during the Cretaceous, about 100 million years ago, when giant clams took over these structures for several tens of millions of years. Both of these ancient times were potentially characterized by ocean acidification. However, reefs have been remarkably resilient over geologic time and generally have been able to adapt to environmental change. For example, as we will see in Module 10, they are able to grow fast enough to keep up with very rapid rates of sea level rise.
Against this background, recent human activity has placed reefs in as precarious a position as at almost any time in their history. The last fifty years have witnessed an extremely dramatic decline in the health of many of the major reefs around the world, including reefs of the Caribbean, the Bahamas, and the Florida Keys, as well as those in the Indian and Pacific Oceans, including the massive Great Barrier Reef of Australia. The outlook for these rich and complex ecosystems is about as bleak as for any ecosystem on Earth. As it turns out, ocean acidification is one of several environmental threats to reefs, with warming, pollution, overfishing, and physical destruction all posing major threats in the future. As we will see, acidification is perhaps the greatest of these threats in the long term.
Before we start, let's consider how reefs grow. The main organism that constructs modern reefs, the coral, includes a number of species belonging to the Cnidaria, a phylum of organisms that uses stinging cells to capture their prey. Modern corals are colonial structures of millions of individual polyps that grow primarily in shallow and clear tropical and subtropical waters, restricted to these areas by light levels and temperatures as well as by nutrients.
Corals reproduce both sexually and asexually. Asexual reproduction involves simple cell division, or budding, and takes place within the colony, whereas sexual reproduction involves the release of gametes into seawater. This is an amazing process that, for many species, happens once a year, timed by the lunar cycle. The fertilized egg forms a larval planula that settles before forming a new colony.
Large reef structures including fringing and barrier reefs, as well as atolls, represent the growth of these colonies over many thousands or millions of years.
Symbiotic algae live within the coral polyp and receive from the polyp the CO2 they require for photosynthesis. Through photosynthesis, the Zooxanthellae convert the CO2 to O2 and provide this vital gas, along with crucial nutrients, to the coral. This is important because, by removing CO2, the Zooxanthellae drive up the saturation state, which facilitates calcification of the coral skeleton. Reefs are made up of much more than corals and their algal symbionts. Other organisms, including coralline algae (algae that secrete high-magnesium calcite and conduct photosynthesis on their own), are active framework builders in modern reefs. Today, sponges, sea anemones, sea urchins, a diverse array of fish, and many other organisms live within reefs, some playing a vital role in building reefs and keeping them healthy, and others taking advantage of their decline.
Although, as we will see, ocean acidification is only just beginning to affect reef growth, many reefs around the world already suffer from another very serious affliction: coral bleaching. One of the key changes in corals over the last few decades has been an increase in the frequency and severity of bleaching events. As we have discussed, shallow-water coral species live in symbiosis with dinoflagellate algae called Zooxanthellae, whose colorful pigments give living corals their beautiful colors: purple, brown, green, yellow, and more. This symbiosis is key to the coral: the Zooxanthellae remove CO2 from the water, boost the saturation, and facilitate calcification of the coral. When temperatures become too hot, however, the Zooxanthellae cannot tolerate the heat and temporarily or permanently leave the coral, leaving it a bleached white color.
Bleaching today is common when temperatures increase even slightly (by about 1°C) in the summer months and thus is often an annual summer event. Bleaching is accompanied by slower growth and increased coral mortality. The response to bleaching differs considerably among species: some are able to recover normal growth rates quite rapidly, whereas others are more vulnerable and unable to recover. Bleaching also changes the ecology of a reef, promoting the growth of algae that blanket corals and make their recovery more difficult.
A major coral bleaching event took place in the Florida Keys in the summer of 2023, when water temperatures rose to 101°F (38°C) and bleaching became widespread. It is too soon to determine the long-term impacts of this bleaching on the reefs in the Keys, but the images are dramatic and the future does not look promising.
One of the most significant bleaching events took place in 2016 and 2017 in the Great Barrier Reef. This catastrophic event, caused by warmer-than-average water, led to up to 50% coral mortality in some areas. Bleaching was also extreme in 2020, and you can see from the maps below that it extended southward, which in the southern hemisphere means it afflicted cooler waters. The extent of bleaching shocked ecologists and is clearly a window into the future. It now seems that bleaching threatens the Great Barrier Reef every year, and with warm El Niño conditions, 2023 is shaping up to be no exception.
Not all corals live in shallow water. A very diverse group of corals are able to thrive in the deep realms of the oceans. These corals lack Zooxanthellae and are able to calcify at cooler temperatures and lower levels of saturation than exist in the shallower waters. The number of known deep-water species has increased dramatically with new methods of exploring the deep oceans, including ROVs (remotely operated vehicles) and AUVs (autonomous underwater vehicles).
These organisms exist in isolated patches at depths down to 2000 meters. In these conditions, the corals grow considerably more slowly than their shallow-water counterparts. They feed on zooplankton and, in some cases, use chemicals emanating from the seafloor as a source of nutrition. Like shallow-water reefs, deep coral reefs provide a habitat for a diverse array of creatures. At the same time, however, it is clear that deep corals are in a particularly precarious situation, as we will see later on.
The growth rate of corals, known as the rate of calcification, corresponds more closely to the aragonite saturation state (Ω) than to any other variable. Cores of corals from the Great Barrier Reef indicate a 14% decrease in calcification rates from 1990 to 2005. Corals generally maintain a high Ω at the site of calcification, but with decreasing CO32-, precipitation of aragonite requires more energy, hence the decrease in calcification rate. Overall, the net calcification rates of reefs drop to zero when the Ω of aragonite is <1, and thus we can predict that reefs worldwide will switch from actively growing via calcification to shrinking via dissolution when CO2 doubles to 560 ppm (the year 2050 in emission scenario A1B). The same logic leads to predictions of decreases in calcification rates of 10-50% by 2050. Experiments show that modern coral species differ greatly in their ability to grow in water with lower saturation: some species are able to continue growing while others are not.
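The relationship between saturation state and calcification can be made concrete with a small numerical sketch. Only the Ω formula itself is standard; the linear rate law, its coefficient, and the example concentrations below are round numbers chosen purely to mirror the qualitative pattern in the text (net growth when Ω > 1, net dissolution when Ω < 1), not measured values for any real reef.

```python
# Illustrative sketch of the aragonite saturation state (Omega) and a
# toy calcification response. The Omega formula is standard; the rate
# law and all numerical values are assumptions for illustration only.

def omega_aragonite(ca, co3, ksp=6.65e-7):
    """Omega = [Ca2+][CO3 2-] / Ksp, with concentrations in mol/kg.
    The default Ksp is an approximate aragonite solubility product
    for warm surface seawater."""
    return ca * co3 / ksp

def net_calcification(omega, k=1.0):
    """Toy rate law: positive (net growth) when Omega > 1,
    negative (net dissolution) when Omega < 1."""
    return k * (omega - 1.0)

# Modern warm surface water: [Ca2+] ~ 0.0106 mol/kg, [CO3 2-] ~ 2.5e-4
omega_now = omega_aragonite(0.0106, 2.5e-4)     # roughly 4: well saturated
print(net_calcification(omega_now) > 0)          # reef grows

# Lower CO3 2- (i.e., higher CO2) pushes Omega toward and below 1
omega_future = omega_aragonite(0.0106, 5.0e-5)   # roughly 0.8
print(net_calcification(omega_future) < 0)       # net dissolution
```

The sketch captures why a drop in CO32- alone, with calcium essentially constant in seawater, is enough to flip a reef from net growth to net dissolution.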
Experiments are one of the best ways to forecast the future, but they have significant limitations. In particular, it is impossible to replicate natural growing conditions in the lab, and experiments are conducted over timescales significantly shorter than those of the changes occurring in nature.
Field studies and experiments often show very different results. Strangely, field results indicate a much higher susceptibility of growth rates to decreasing saturation than do laboratory experiments. These studies suggest that there is much left to learn about calcification and how it will change in the future. At the same time, it is clear that there is significant variability among species, with some species having a much greater likelihood of survival in the future. Moreover, even within individual species, it is apparent that some reefs are likely better able to survive than others. Factors such as community composition, growth rate, nutrient levels, and local variations in seawater chemistry and sediment composition will also play a vital role in determining which reefs survive.
Since the coralline algae are more tolerant of colder waters, they have a very widespread distribution in the oceans, extending from the tropics to polar regions. The algae are also able to use low light levels for photosynthesis and therefore live considerably deeper than most corals. Their distribution suggests that they have great potential to adapt to variable environments. As it turns out, however, they may be in a more precarious position than the corals.
The recent decrease in CO32- has also begun to lower calcification rates of the coralline algae. These species are composed of high-Mg calcite, which is the most soluble form of CaCO3 (more so than low-Mg calcite and aragonite), so they are particularly prone to ocean acidification. Experimental work confirms that calcification by the coralline algae is particularly sensitive to CO2 levels, with growth rates slowing significantly and dissolution actually beginning at moderately high CO2 levels.
Perhaps the coralline algae will be the “canary in the coal mine” for the dissolution of all framework structures under rising CO2. For the corals, it is very apparent that the threat varies considerably from reef to reef. Deep-water corals are in particular jeopardy because calcification rates are lower in colder waters, because saturation levels at these depths are lower, and because, in the future, saturation stands to decrease more rapidly in deep than in shallow waters. However, the fate of reefs in the tropics is also likely to vary significantly.
Making predictions about the exact impact of ocean acidification on the future of coral reefs of all types is inherently difficult. Acidification will occur in parallel with other deleterious effects such as bleaching and sea level rise. Species that are most susceptible to the effects of bleaching may also turn out to be more susceptible to extinction: by impacting the Zooxanthellae, bleaching impacts calcification and thus may exacerbate the impact of acidification. However, these same corals also recover from bleaching events more quickly and possess shorter generation times, and thus they may be able to evolve more rapidly to tolerate bleaching in the future. At the same time, nutrient levels in reef environments have decreased and will continue to decrease as stratification of the upper ocean increases, and this will put considerable strain on future reefs. Temperatures, nutrient levels, local rates of sea level rise, and human activities will likely decide which reefs are the most vulnerable, with acidification possibly pushing the most threatened systems over the edge into extinction.
Probably the largest uncertainty with respect to the future of reefs is whether corals will be able to adapt to lower saturation levels, increased temperatures, and lower nutrient levels, and if so, how rapidly. As we have seen earlier, corals have shown great resilience in surviving numerous environmental threats in the past. So, why do reefs, which recovered after times of environmental stress in the past, at times when temperatures were even warmer than today and CO2 levels higher, appear to be in such dangerous territory now? The answer is that in the past, temperature and CO2 perturbations occurred slowly enough that the eventual increase in weathering (remember the feedbacks we discussed in Module 3) essentially reduced the rate of CO2 addition and buffered the ocean with CO32- before extinction occurred. What concerns scientists is that today's warming and CO2 addition are rapid enough that the weathering feedback will lag the decrease in saturation by several thousand years.
So, we are left with a lot of questions: will the rates of saturation decrease and temperature rise be too rapid for modern species to adapt? Will algae take over the niche of shallow-water corals and dominate the low pH oceans of the future? Or will a few species of coral and possibly coralline algae develop the ability to calcify rapidly enough to survive the current threats and take over the niche of species that do not? Will Zooxanthellae themselves be able to adapt and assist corals in calcifying? Finally, when will feedbacks, largely through weathering, come into play and make conditions more favorable for calcifying organisms? For some of these questions, it’s a matter of wait and see. However, ongoing research should shed light on others. For example, the genetics of coral populations are currently being explored to understand the ability of corals to adapt to environmental change.
At this stage, however, any outcome is possible for corals, ranging from complete extinction by late in the 21st century to the adaptation to a new set of environmental parameters. Ultimately, like many elements of the ecosystem, the fate of reefs may rest on how well we manage CO2 emissions in the future.
The goal of this lab is to learn to assess the health of coral reefs. First, you will watch videos from the Catlin Seaview Survey, along with photos, to learn how to determine the health of reefs. Then you will look at before-and-after photos of reef bleaching events. Finally, you will examine the risk to reefs in the future.
Please watch the videos below. It is easiest to watch the videos in your browser (click the button at the top right of Google Earth) in full-screen mode. To move around, place your cursor in the videos to manipulate the camera and stop on particular items of interest or to change direction. The best way to move the camera is with your keyboard arrows (side to side) and cursor (up and down).
The goal of this part of the lab is for you to show that you can identify different types of coral and assess the overall health of the reef at different locations. Make sure you have read the material on reefs in the module before attempting to complete the lab. Below are the different types of coral for you to identify. In addition, we show pictures of algae that colonize reefs as well as of reef damage from storms. Healthy corals show a variety of colors from their different algal symbionts. Unhealthy corals show fewer colors, more algal colonization, and more breakage, and they are often bleached white. Remember, algae are one of the key markers of an unhealthy reef.
The following images show some of the range of morphologies and colors of colonizing algae.
ReefsAllFin.kmz
Load the ReefsAllFin.kmz file. The locations of reefs of interest are shown with flags; the videos are shown with diver markers, and the still photographs with wave markers. The videos run best if you open them in a browser such as Firefox (see the Google Earth window, top right). Before submitting your lab, let's begin with the practice. Make sure you do this part of the lab to get comfortable with the tasks you will be asked to do and to receive feedback about your answers. Watch the videos at the following locations and answer the questions below. Make sure you maneuver up and (especially) down, as well as side to side.
In this section, you will assess the health of reefs using their color, the presence and abundance of algal overgrowth, and the presence of bleaching. Go to the Belize reef (click on the flag in Google Earth, then open the address in your browser). Look around the reef and answer the following questions.
In this part of the lab, you will compare photographs from before and after major events that have impacted reefs. Answer the questions about the changes in the abundance of different types of coral, algal overgrowth, or percent bleaching.
Go to Lizard Island. Please look at the two pictures and answer the following questions.
Corals and coralline algae are not the only organisms highly susceptible to ocean acidification. Coccolithophores, foraminifera, and pteropods, three very different groups of plankton (a term that refers to organisms that float passively in the upper ocean), are also threatened by increasing atmospheric CO2 levels. Pteropods, often called sea butterflies, are tiny snails with shells made of aragonite that thrive in shallow waters and play a particularly important role in polar ecosystems.
Because of the susceptibility of aragonite to decreasing saturation levels, combined with the effect of temperature on saturation (calcification requires more energy in cold water), pteropods may cease to exist at polar latitudes by the middle of the 21st century. In some polar areas, these organisms account for over 60% of the zooplankton biomass, so their extinction or migration to warmer regions could have major repercussions for organisms up the food chain.
The coccolithophores and foraminifera are constructed of low-Mg calcite and therefore are more stable than pteropods under conditions of increasing CO2.
As we have learned earlier, foraminifera are groups of zooplankton with habitats both near the surface of the ocean (planktonic foraminifera) and on the ocean bottom (benthic foraminifera).
The planktonic species are sensitive to changes in CO2: they showed subtle changes in shell mass during the Pleistocene in response to CO2 fluctuations, and they have already begun to get lighter in response to the recent CO2 increase. This recent thinning of shells is likely a result of the effect of decreasing CO32- on calcification by foraminifera. Interestingly, many planktonic species also harbor dinoflagellate symbionts that play a role in calcification. Changes in calcification appear to be a threat to the planktonic foraminifera, but we do not yet know how serious this threat is. However, we do know that the foraminifera are also threatened by the potential loss of a dominant source of food, the coccolithophores, as a direct result of CO2 addition.
Coccolithophores are ubiquitous in the oceans, essentially serving as the dominant species of phytoplankton in vast regions of the open ocean that are characterized by lower nutrient levels. For this reason, considerable attention has been devoted to the potential effects of increasing CO2 on the coccolithophores.
Coccolithophores are haptophyte or golden brown algae (similar to diatoms) that produce tiny calcite scales known as coccoliths during certain phases in their life cycle.
The plates are several microns in diameter and can remain attached to the cell, covering the soft organelles with a protective shield, or break off after they are fully grown. Coccolithophores reproduce asexually and, given the right conditions, have the potential to multiply rapidly. When conditions are suitable, coccolithophores can form blooms of millions of cells per liter of seawater. These organisms are consumed by foraminifera and copepods and transported to the bottom of the ocean as marine snow.
Because of their prolific production and global distribution, coccolithophores are a vital part of the carbon and carbonate cycles of the oceans. Modern coccolithophores are dominated by the species Emiliania huxleyi, a species with very small (1-2 micron) and delicate coccoliths. Due to their size and ecology, coccolithophores are inherently more difficult to study in the natural environment than are corals. However, samples can be collected using filters, and cores of sediments can be studied to determine the effects of past changes in CO2 on coccolithophore morphology. Coccolithophores can also be cultured in the laboratory, where CO2, CO32-, and pH levels can be altered to observe the effect of these variables on calcification. Although the results of both field and lab studies are by no means simple, it appears that in the case of E. huxleyi, increasing CO2 and decreasing CO32- have the effect of producing thinner coccoliths with smaller masses. In addition, laboratory studies show that such conditions also lead to malformations in E. huxleyi coccoliths, a potential sign of difficulty in calcification. Such trends of decreasing mass are by no means universal, however. Some lab experiments and field collections in particular environments show that species other than E. huxleyi actually grow thicker coccoliths under low CO32- conditions, and even E. huxleyi has recently increased in mass in parts of the ocean. More study is required to determine the precise fate of the coccolithophores in a high-CO2 world. However, the current signs generally point to a significant reduction in the rate of calcification, which could lead to significant changes both in marine ecosystems and in the carbon cycle.
Coccolithophores have a spectacular 220-million-year fossil record. This record allows paleontologists to observe the effects of past climate change, including increasing CO2, on the livelihood of this group of plankton. Ancient global warming events, including those at 120 million and 55 million years before present, have been examined for evidence of ocean acidification in the morphology of coccolithophores. The event at 120 million years shows evidence for a decrease in coccolith size, but this change is not apparent in the 55-million-year event. As the species existing during these ancient events were entirely different from those of the present time, it could be that ancient species responded in a different fashion from those living today. Alternatively, the changes in ancient plankton assemblages may have been responses to environmental variables other than CO32-, for example, temperature and nutrients. The coccolithophores, like the corals, have been able to survive intervals of great ecological upheaval in the past. However, given that the rates of modern environmental change are so rapid compared to ancient events, we cannot assume that these same groups will be able to adapt to the changes to come. Moreover, while the coccolithophores appear to be less endangered by increasing CO2 than the corals, largely as a result of their mineralogy, the impact of the decline of these vital algae would likely be even more devastating to the oceans.
Red tides are common events in warm and polluted coastal oceans. They form when dinoflagellate algae explode to huge population levels. Because the dinoflagellates have red plastids, the waters literally turn red. Dinoflagellates take advantage of harsh environmental conditions that kill off other organisms. As you will find out in the pages to follow, these tides can be major public health hazards.
I remember that the grouper was delicious. Blackened, spicy, with an ear of corn and some slaw. On a white paper plate. I polished it off with a Heineken. It was a warm early January day, a seafood festival in Florida City. We went canoeing after lunch. I remember waking up the next morning with an intense thirst and extreme nausea. When I tried to get out of bed, I sprawled on the floor; my left side was completely paralyzed! I rolled to the sink, crawled up, and poured myself a glass of water. I gulped it down, and, to my horror, it felt boiling hot! The neurological symptoms soon went away, but the nausea and fatigue lasted weeks; I lost at least ten pounds. It took the doctors that long to diagnose the problem: I had Ciguatera poisoning. That delicious grouper had ingested a lot of toxic dinoflagellates and transmitted the toxins to me. My first experience with harmful algae.
Red tides represent one of the most serious threats to coastal ecosystems today. Most red tides result from the input of an excessive amount of nutrients from fertilizers, sewage, and soils of nearby land areas to bays, estuaries, and shallow seas. These nutrients cause explosive growth of microscopic species of algae, a number of which carry toxins that are harmful or even lethal to other organisms. Red tides can be caused by major storms such as hurricanes, which cause excess runoff from the land and resuspension of the seed stages of the algae.
Red tides occur when dinoflagellates, and rarely diatoms, grow in massive quantities in surface waters. The photosynthetic organelles of these organisms, known as plastids or chloroplasts, are red, or golden brown in the case of diatoms, and the profusion of cells in surface waters imparts a red or brown color. Some of the culprit dinoflagellate and diatom species produce by-products that are highly toxic to many other organisms living in the coastal zone, all the way from fish to turtles to large mammals such as dolphins, manatees, and whales, as well as to humans. In some cases, shellfish or small fish such as sardines that consume the plankton are not harmed by the toxin but concentrate it for organisms that feed on them, a process known as bioaccumulation. Ingestion of the toxins can result in developmental, immunological, neurological, and reproductive damage of the host organism. For this reason, red tides are also known as harmful algal blooms (HABs). In fact, we will use the term HAB here because these events are not associated with ocean tides, because many HABs are not associated with a red color, and because blooms of dinoflagellates are often not harmful.
Harmful algal blooms are a global phenomenon and have increased in frequency in the last thirty years. Part of the increase may result from greater awareness of the phenomenon, but increasing pollution is also considered responsible. There are a number of reasons why climate change may further increase the occurrence of HABs. For example, the increase in the frequency of large storms such as hurricanes will lead to greater runoff and input of nutrients from land. In addition, changes in temperature, wind patterns, upwelling, and stratification will alter the distribution of species. Because the degradation of the large amount of cellular material produced in HABs consumes oxygen, HABs can result in hypoxic or anoxic conditions.
Dinoflagellates are a group of microscopic single-celled organisms or protists that are dominantly autotrophic (i.e., primary producers). Interestingly, many species are also mixotrophic, having the ability to ingest their prey as a source of energy. Some species are entirely heterotrophic, lacking chloroplasts or plastids, and have been termed carnivorous. In fact, there have been reports in the scientific literature that some species have the ability to consume fish after having paralyzed them with neurotoxins. These claims are controversial but have given dinoflagellates a near-mythical reputation among the oceanic plankton.
Diatoms and dinoflagellates are most common in the coastal oceans but also have the ability to live in freshwater environments and in intermediate-salinity environments where fresh and marine waters mix in estuaries. Together, they are among the most prolific primary producers in the ocean. Dinoflagellates have a highly complex life cycle that alternates between a motile stage and a resting, or cyst, stage. In short, dinoflagellates enter the resting stage via sexual reproduction when conditions in the surface ocean are not suitable for them to thrive. They can remain dormant for weeks, months, or years before they “excyst” when surface conditions improve, reproduce vegetatively, and repopulate the surface ocean. Excystment and repopulation are triggered by changes in temperature, light, or oxygen levels, or even by resuspension of cysts by storms. Dinoflagellates generally thrive when nutrient levels are elevated, and, under conditions of extremely high nutrient levels, cell division can be so rapid that extremely high cell counts (millions of cells per milliliter of seawater) are reached, resulting in red tides. The cyst stage acts as a very effective mechanism for seeding blooms.
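The explosive population growth described above is simply repeated cell division, which a short sketch can illustrate. The starting density, division rate, and duration below are hypothetical round numbers for illustration, not measured values for any real bloom.

```python
# Sketch of exponential bloom growth from a small excysted seed
# population. All numerical values are illustrative assumptions.

def cells_after(initial_density, divisions):
    """Cell density after a given number of division cycles,
    where each cycle doubles the population."""
    return initial_density * 2 ** divisions

# e.g., 100 cells per mL dividing once per day for two weeks:
print(cells_after(100, 14))  # 1638400 cells/mL: bloom-level density
```

The doubling arithmetic shows why blooms appear so suddenly: a barely detectable seed population reaches red-tide densities in a couple of weeks once conditions turn favorable.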
The following video summarizes the life cycle of the dinoflagellates.
Dinoflagellates have a broad range of different ecologies. As we saw earlier in the module, they can be endosymbionts of corals, facilitating calcification in the host colony. They play this same role in foraminifera and radiolarians, a group of siliceous zooplankton.
Diatoms are autotrophic protists that produce a delicate, microscopic test of opaline silica.
They are non-motile, and, for most of their life cycle, they reproduce asexually. Many nearshore diatom species also have a resting stage akin to the dinoflagellates, allowing them to exit the surface zone when conditions are unfavorable for their growth. This may occur in winter when surface waters are cold or at times when nutrients are depleted. The resting spore stage actually may resemble the vegetative stage of dinoflagellates. Like the dinoflagellates, diatoms are able to reproduce extremely rapidly by simple cell division, and this allows them to rapidly dominate the surface ocean when nutrients are readily available.
The harmful species belong to the pennate diatoms, which are long and thread-like and have the ability to attach to a host, although the relationship is not symbiotic. Both dinoflagellate and diatom cysts can be moved around the oceans by currents, storms, dredging of the ocean bottom, transport in ships' ballast water, or even attachment to higher-level organisms. Toxins are not known in the cyst stage of either group.
Of the 60 or so species that cause red tides, only a handful are known to be toxic. Dominant dinoflagellate HAB genera include Alexandrium, Karenia, and Pfiesteria. The diatom genus most commonly associated with HABs is Pseudo-nitzschia. Each of these genera produces a different toxin and thus has a different effect on organisms further up the food chain. Next, we discuss some of the HAB species in detail.
Alexandrium is the dominant HAB taxon in coastal regions of New England and eastern Canada, but it is also found from California to Alaska.
It is a dinoflagellate that produces saxitoxin, one of the most powerful neurotoxins known. These toxins destroy the function of nerve cells and can thereby cause paralysis. Saxitoxins are most effectively concentrated by shellfish such as clams, quahogs, mussels, scallops, and oysters, which filter large volumes of seawater to acquire their nutrition. Although saxitoxin does not harm these shellfish, even small quantities of the toxin can be extremely dangerous for humans, resulting in a serious illness known as paralytic shellfish poisoning (PSP). The saxitoxin attacks the human nervous system within 30 minutes of ingestion, with symptoms that may include numbness, tingling, weakness, partial paralysis, incoherent speech, and nausea. In severe cases, the toxin can lead to respiratory failure and death within a few hours. Alexandrium toxins have also harmed whales, sea otters, and birds.
The following videos describe the causes and impacts of red tides as well as possible antidotes for shellfish poisoning.
Karenia is the dominant taxon causing red tides in Florida and Texas, but species of Karenia have occasionally been found farther up the east coast, in North Carolina.
It produces a toxin known as brevetoxin (named after the species Karenia brevis). Like saxitoxins, brevetoxins damage nerve cells, disrupting normal neurological processes and causing neurotoxic shellfish poisoning (NSP). In humans, the result is gastrointestinal symptoms and a variety of neurological ailments, but there are no known fatalities. In fish, however, the brevetoxins attack the central nervous system and cause respiratory failure. Karenia dinoflagellates are responsible for massive fish and bird kills in the Gulf of Mexico. The brevetoxins are colorless, odorless, and heat- and acid-stable; thus they survive food preparation.
The genus Gambierdiscus lives in tropical waters, usually in reefs, and produces a toxin known as Ciguatoxin that causes gastrointestinal problems followed by mild neurological symptoms. This syndrome is known as Ciguatera fish poisoning. Because the toxin is fat soluble, it gets concentrated up the food chain by bioaccumulation, from seaweed to smaller fish and then to larger fish. The larger fish, which are the most dangerous to eat because they have the highest toxin concentrations, include commercially available seafood such as grouper, snapper, and barracuda. Ciguatera is responsible for more human illnesses (an estimated 10,000 to 50,000 cases annually) than any other HAB toxin.
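The stepwise concentration of a fat-soluble toxin like Ciguatoxin can be sketched numerically. The tenfold concentration factor per trophic step below is an assumed round number for illustration, not a measured value for Ciguatoxin or any real food chain.

```python
# Hypothetical illustration of bioaccumulation up a food chain: each
# predator concentrates the fat-soluble toxin of its many prey.
# The 10x factor per trophic level is an assumption, not data.

def toxin_up_chain(base_concentration, trophic_steps, factor=10.0):
    """Toxin concentration (arbitrary units) after a given number of
    trophic steps, each multiplying the concentration by `factor`."""
    return base_concentration * factor ** trophic_steps

# algae -> small herbivorous fish -> grouper is two steps:
print(toxin_up_chain(1.0, 2))  # 100.0: a hundredfold concentration
```

This multiplication is why the largest, longest-lived predatory fish carry the highest toxin loads and are the most dangerous to eat.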
Pseudo-nitzschia and the species Nitzschia navis-varingica are diatoms common in Californian red tides. These taxa produce domoic acid, which is concentrated by filter-feeding shellfish. This neurotoxin can also bioaccumulate in fish such as anchovies that feed directly on the diatoms. Domoic acid causes a variety of gastrointestinal ailments, memory loss, and brain damage in humans; the resulting illness is referred to as amnesic shellfish poisoning. Rarely, the neurotoxin can be fatal. It can also affect marine mammals, causing seizures.
The species Pfiesteria piscicida and P. shumwayae have been the most common dinoflagellate species in red tides in estuaries and bays along the east coast of the US from Delaware to Florida. These species occur where freshwater and saltwater mix and have not been reported from freshwater environments or the open ocean. Pfiesteria blooms are restricted to the summer months. The species have been associated with massive kills of menhaden and other estuarine fish in the Chesapeake Bay and in the Tar-Pamlico and Neuse River estuaries in North Carolina. Fish in contact with Pfiesteria rapidly develop bleeding lesions and their skin actively flakes off, and it has been proposed that the presence of live fish stimulates the production of toxin in the dinoflagellate. Ultimately the open lesions may destroy gill function and lead to death. Nevertheless, the connection between fish mortality and Pfiesteria is still doubted by some scientists. In fact, the effects of Pfiesteria on fish and human health have been one of the largest and nastiest controversies in marine science over the last 25 years, and the conflict does not appear to be anywhere near over.
The following video explains some of the research on Pfiesteria.
At least part of the debate has been fueled by the popular press, which focused attention on the organism after studies suggested that it was carnivorous. These studies indicated that Pfiesteria species ingested the skin of fish after it flaked off. Indeed, lab studies have shown that when fish were left in tanks with Pfiesteria, the fish died within hours. For this to occur, however, the dinoflagellate and fish must be in direct contact; without contact, the same studies show that fish suffered no ill effects. Some scientists alternatively point to water molds or fungi, including the species Aphanomyces invadans, as the pathogenic cause of the ulcerative lesions, skin loss, and gill damage. A. invadans and other fungi are universally present in fish with ulcers and skin loss. Also weakening the case for Pfiesteria as the culprit, the genus is still known to exist in North Carolina estuaries, yet fish kills have become less frequent recently. Moreover, where lesions were observed on menhaden, nearby fish including catfish, perch, and carp were unaffected. These disparities have cast doubt on whether Pfiesteria is harmful to fish at all. Indeed, opinions differ greatly among public health professionals, and warnings from different state agencies are in conflict.
The health impact of Pfiesteria on humans is also uncertain, as is the method of transmission of the potential toxin. Scientists working with Pfiesteria in the laboratory have suffered from long-term neurological symptoms, such as memory loss, fatigue, and dermatological problems, and fishermen in contact with Pfiesteria-related fish kills have also suffered from similar ailments. However, other groups of fishermen who have come in close contact with lesion-covered fish have not reported adverse effects. There are reports that the hypothetical Pfiesteria toxin is transmitted via aerosols.
Until recently, the missing link in the Pfiesteria conundrum was that the toxin produced by this organism remained elusive. However, in 2007 scientists in a government lab claimed the first positive identification of a toxin associated with Pfiesteria. Even with this identification, questions remain; for one, the toxin is unstable in the natural environment, and second, it has not been proven to have adverse health effects. Thus, the controversy about Pfiesteria is far from over. In reality, a number of factors may result in the fish kills; in particular, the fish in estuaries may have already been under great stress from other biological agents (bacteria, viruses, fungi, parasites), exposure to chemicals (pollutants, toxins), suboptimal water quality, and rapid water temperature change, all of which have the potential to cause lesions to form.
Far less controversial than the relationship of Pfiesteria with fish kills are studies that directly relate fish kills to low oxygen levels caused by algal blooms. In a number of estuaries along the eastern US, warmer waters in summer combined with increased production by algae, as a result of increased runoff and eutrophication, lead to severely decreased oxygen levels and major fish kills without the involvement of a toxin.
Finally, HABs are not always produced by dinoflagellates and diatoms. Cyanobacteria, or blue-green algae, another group of single-celled organisms, but one that is prokaryotic rather than eukaryotic, are also known to produce extremely potent toxins that can cause illness in fish, birds, and mammals including humans. Because of their potential to be harmful, blooms of this group are known as CyanoHABs. Cyanobacteria are some of the oldest species on Earth and are known to tolerate very tough conditions including hot, cold, and salty waters as well as darkness. They live under ice sheets, near hydrothermal vents, and were some of the first organisms to colonize the oceans after the massive asteroid impact that killed the dinosaurs.
Although the full scale of health effects of CyanoHABs on humans is not yet determined, the toxins may cause gastrointestinal, respiratory, allergic, and neurological responses, and potentially lead to liver damage. In addition, prolific growth of cyanobacteria can block sunlight, which can harm other organisms, and use up oxygen, which can lead to hypoxia and anoxia. As in the case of the HAB dinoflagellates, the growth of CyanoHABs may be stimulated by nitrogen loading from agricultural and industrial runoff, as well as sewage disposal.
As we have seen, there is a great variety in the biology and ecology of HAB species. However, they all share one major trait: the ability to wreak havoc on coastal fisheries. Since the growth of most if not all of the species directly responds to nitrogen loading, limiting the harm to fisheries will require significant changes in agricultural practices combined with modifications of drainage in coastal regions. Such changes will be difficult, if not impossible, to accomplish in the near term. Thus, the best strategies to deal with HABs focus on the integration of highly detailed algal sampling programs, ecological forecasts, and resource management. For example, if HABs can be predicted, then warnings can be issued and areas placed off-limits to fisheries. Such strategies are being employed in areas where HABs are common, including the Gulf of Maine, the Gulf of Mexico, and the Pacific Northwest.
HAB forecasts integrate ecological models, based on the physics, chemistry, and biology of nearshore and offshore regions, with satellite data and in situ measurements of cell counts and toxin levels. For example, models can be used to predict the development and movement of a HAB in the region of interest. Models can be used to identify HAB triggers (i.e., nutrients or temperature), likely areas of cyst seedbeds, likely bloom toxicity based on cell density, and progression of the toxin through the food chain, as well as the ultimate decline of the HAB.
A new threat to fisheries around the world has developed over the last decade---a surge in the number of jellyfish in coastal waters. The most dramatic of these outbreaks is in Japanese waters, where the giant Nomura’s jellyfish has increased significantly, wreaking havoc with fisheries in the Sea of Japan.
Jellyfish populations are normally held in check by fish, mostly because these two groups compete for the same food sources. However, overfishing in many parts of the ocean has led to increasing jellyfish populations. Jellyfish may also be aided by warming ocean temperatures, which favors their development, and by the destruction of habitats of other natural predators such as turtles.
The massive Nomura’s jellyfish is a great threat to Chinese, Japanese, and Korean fisheries. These creatures can grow to two meters in diameter (the size of large refrigerators) with a weight of 200 kg.
Because of their size, they consume massive amounts of zooplankton, depleting this vital part of the food chain for other organisms. The key threat of the Nomura’s derives from the fact that this jellyfish reproduces extremely rapidly. A mature jellyfish has the ability to produce billions of eggs at a time, and they can do this when they are attacked. Once fertilized, these eggs develop into a resting polyp stage that also has the ability to multiply rapidly, effectively carpeting areas of the seafloor. When conditions are suitable, the polyp reproduces asexually, developing into the medusa stage, which grows into the mature jellyfish.
Once ideal conditions develop, either by increasing nutrients or warming of the surface ocean, Nomura’s jellyfish populations literally explode and render fishing virtually impossible because nets become filled with jellyfish. These jellyfish can also continue to reduce the number of fish in the oceans by feeding on their eggs. Moreover, there is evidence that jellyfish can tolerate conditions, like hypoxia, that fish cannot.
The following video summarizes the impact of giant jellyfish on Japanese fisheries.
Blooms of other jellyfish species are being reported in many other parts of the ocean. In the Gulf of Mexico, for example, over the last thirty years populations of two species of jellyfish, the sea nettle and the moon jellyfish, have exploded, especially in dead zones, as these are among the few organisms that can tolerate hypoxia. Jellyfish in the Gulf now swarm over hundreds and perhaps even thousands of square miles each summer.
Here also, invasive species of jellyfish, including the Australian jellyfish, have been reported. Several other factors besides hypoxia have caused the increase in Gulf of Mexico jellyfish. As in the Sea of Japan, overfishing has reduced one of the main jellyfish competitors. In addition, drilling platforms have provided habitats in which jellyfish polyps can multiply. As in the Sea of Japan, jellyfish in the Gulf of Mexico are impacting the fishing industry.
Jellyfish swarms, too, have plagued other regions; they include northern Australia where the highly venomous box jellyfish has expanded its range, the Black Sea, and the Bering Sea off Alaska. Worldwide, jellyfish are one of the few organisms that can thrive in dead zones. With the spread of such dead zones in the oceans as a consequence of marine pollution and climate change, we could be entering the age of the jellyfish.
In this module, you should have learned the following concepts:
You should have read the contents of this module carefully, completed and submitted any labs, the Yellowdig Entry and Reply, and taken the Module Quiz. If you have not done so already, please do so before moving on to the next module. Incomplete assignments will negatively impact your final grade.
There is a new generation of super-rich, highly influential people who are starting to invest massive amounts of money and influence in truly important causes: Bill and Melinda Gates in global health, Warren Buffett in reproductive health and food, and the Jolie-Pitts in community development and the Katrina recovery effort. Now, enter Matt Damon and Gary White, who have co-founded water.org, an organization dedicated to developing and delivering solutions to the global water crisis. Visit water.org [227] and you will find an impressive array of information and programs. Here are direct facts from that site that convey the magnitude of the current global water emergency.
With the exception of food, no resource is more crucial for human survival than water. Ancient civilizations were repeatedly forced to deal with the threat of diminishing water supply. Now, climate change presents a new threat by causing the supply and distribution of water to change over the coming decades and centuries. This situation will be made significantly more dire by explosive population growth in parts of the world where water is scarce and by pollution that will continually limit the supply of clean drinking water. The IPCC (2007) stated the situation very clearly: “Water and its availability and quality, will be the main pressures on, and issues for, societies and the environment under climate change.” The latest 2022 report stresses the need for adaptation. This will be much easier in the developed world than in developing countries where resources are limited.
Because groundwater systems recover very slowly from human impacts, remediation can be extremely difficult and expensive. In this module, we begin by examining the distribution and behavior of water close to the Earth’s surface; next, we consider how climate change will alter the supply of water and how population growth will change the demand; finally, we present management strategies that will hopefully preserve the supply of water for humans around the globe.
Ancient civilizations developed in some of the driest realms of the planet. Populations in Egypt and Mesopotamia (an area that includes parts of modern Iran, Iraq, Syria, and Turkey) learned how to survive in an arid environment. For example, ancient Egyptians and Mesopotamians constructed an extensive network of canals to transport water away from the Nile River for irrigation. Shadufs, contraptions consisting of a bucket at the end of a boom that could be lowered with a rope, were used to haul water out of the canals and onto the fields. These civilizations routinely had to live with highly irregular precipitation: periods when large amounts of rainfall flowed through the canals and flooded large areas, alternating with times of almost no rainfall.
As population has increased, and especially with the rise of industry in developed nations, demand for water has soared. Moreover, industry has increased competition, often for the cleanest drinking water supplies.
Nowhere has the interplay between the increasing demand and limited supply of water been more complicated than in the desert southwest of the US. The city of Los Angeles receives a meager 38 cm (15 in) of rain a year. Yet, the city has the highest water usage in California and some of the highest use rates in the country. You would never know by looking at the number of golf courses and car washes and the abundance of lush, green lawns that the city is located in a desert. The same is true for Las Vegas, which receives significantly lower rainfall and is one of the fastest growing cities in the US.
Los Angeles uses much more water than it receives from precipitation and, thus, it imports water from the northern part of California and from states to the east via the Colorado River. In fact, much of the development of Los Angeles was fueled by this supply of water from the Owens Valley in the Sierra Nevada and the Colorado River to the east. Water from the Colorado River began to flow into Los Angeles in the 1920s and 1930s with the construction of Parker Dam and the Colorado River Aqueduct.
The growth of other cities that lie in arid locations closer to the Colorado River, including Denver and Phoenix, will likely lead to bitter litigation over water rights in the southwest in the coming decades. Water supply to the Colorado River is declining markedly as a result of climate change, and this is clashing with the booming growth of these southwestern cities. In Spring 2023, the US government brokered a deal with the southwestern states that includes a 13 percent decrease in water supply from the river. This will mandate major conservation efforts and slower growth. Overseas, countries in arid parts of the globe, for example, Turkey, Iraq, and Syria, have also had major disputes about water rights and management. Turkey, which lies at the source of the Tigris and Euphrates rivers, has constructed dams on both rivers for irrigation purposes as well as for hydroelectricity, and this has led to long conflicts with countries downriver including Syria and Iraq.
With projections for the increasingly rapid growth of world population and coupled demand for water for drinking and agriculture, as well as for industry, maintaining a clean water supply looks to be one of the grand challenges of the 21st century. The goals of this module are to learn about how water is cycled on the Earth’s surface and how climate change coupled with the growth of the population will accentuate the global water crisis.
On completing this module, students are expected to be able to:
After completing this module, students should be able to explain the following concepts:
Below is an overview of your assignments for this module. The list is intended to prepare you for the module and help you to plan your time.
Action | Assignment | Location |
---|---|---|
To Do | | |
The distribution of water on the Earth’s surface is extremely uneven. Only 3% of water on the surface is fresh; the remaining 97% resides in the ocean. Of freshwater, 69% resides in glaciers, 30% underground, and less than 1% is located in lakes, rivers, and swamps. Looked at another way, only one percent of the water on the Earth’s surface is usable by humans, and 99% of the usable quantity is situated underground.
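The arithmetic behind these percentages can be checked with a few lines of Python. This is just a sketch of the figures quoted above (3% fresh; 69% of freshwater in glaciers, 30% underground, the remainder at the surface):

```python
# Water-distribution arithmetic, using the percentages quoted in the text.
total = 100.0                      # all water on Earth's surface, in percent
fresh = 0.03 * total               # 3% is fresh; the other 97% is ocean
glaciers = 0.69 * fresh            # 69% of freshwater is locked in glaciers
underground = 0.30 * fresh         # 30% of freshwater is groundwater
surface = fresh - glaciers - underground  # lakes, rivers, swamps

print(f"Freshwater:  {fresh:.2f}% of all water")
print(f"  glaciers:    {glaciers:.2f}% of all water")
print(f"  groundwater: {underground:.2f}% of all water")
print(f"  surface:     {surface:.2f}% of all water")
```

Note how small the last number is: the lakes and rivers we see account for only a few hundredths of a percent of all water, which is why groundwater matters so much.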
All one needs to do is study rainfall maps to appreciate how uneven the distribution of water really is. The white areas on the map below received annual rainfall under 400 mm over the last year, which makes them semi-arid or arid. And, remember, projections are for significant aridification to occur in many dry regions and for more severe rainfall events to characterize wet regions.
The following video provides a schematic summary of the water cycle.
The hydrologic cycle describes the large-scale movement of water between reservoirs including the ocean, rivers and lakes, the atmosphere, ice sheets, and underground storage or groundwater.
Water evaporates from bodies of water such as the ocean and lakes to form clouds. The moisture in clouds ultimately falls as rain or snow, some of which returns back to the ocean, lakes, and rivers. The remainder percolates into the soil, where it reacts with organic material and minerals and ultimately moves downwards to form groundwater. The amount that percolates depends strongly on evaporation as well as soil moisture, as shown in the video below.
Video: NASA Land Globe Animation (1:00) This video is not narrated.
Freshwater used for drinking, agriculture, and industry derives dominantly from rivers, lakes, and groundwater, with groundwater accounting for approximately 30 percent of the freshwater on the Earth’s surface and a large share of potable (i.e., safe drinking) water. In the US, 86% of households derive water from public suppliers, and 14% supply their own water from wells. Nevertheless, households utilize only one percent of the water extracted; the rest is supplied to industry (4%), agriculture (37%, compared to 69% worldwide), and thermoelectric power plants (41%). Water use in most areas of the US has increased substantially over the last century.
Download this lab as a Word document: Lab 8: Stream Flow [229] (Please download required files below.)
In this lab, we will observe the impact of precipitation on stream flow and flooding. The practice and graded sequence of steps are identical. Please go through the following sequence of questions for the practice, check your answers in the Practice Lab, then take the Graded Lab when ready.
The US Geological Survey maintains the water watch website, which shows the current state of stream flow, drought, flood, and past flow and runoff. We will focus on stream flow data, and you will be required to summarize national trends. The data are expressed as percentiles relative to normal stream flow for the date of interest. The site has an animation builder [230] that allows you to observe changes in stream flow over short periods and for intervals back to 1999. The animations show both regular stream flow and flood stage locations.
Observe the flood and stream flow animations for the following intervals, and describe what you see in terms of major floods and general stream flow. (You can toggle back and forth between these two kinds of animations using the Map Type menu on the animation panel; Real-Time is general stream flow, while the Flood maps show black triangles for places where the streams are actually flooding above their banks.)
Using the USGS animation builder [230], answer the following practice questions:
Because of the significance of this groundwater for human use, we consider the behavior of water underground in some detail here. It might seem complex at first, but water flow follows very simple laws of physics.
Water at the surface of the Earth seeps slowly into the soil, a process known as percolation. Water will percolate through the uppermost layer of soil and loose material that contains air, the aerated zone, down to a level called the water table. The water table is at the top of the permanently waterlogged or saturated level.
The water table is a critical level because it determines the level of groundwater available for drinking and irrigation. The flow of water underground is controlled by a number of factors, including the permeability of the aquifer and the hydraulic gradient. Explained simply, the hydraulic gradient between two wells is the difference in hydraulic pressure (known as hydraulic head) divided by the distance between them. If the difference in hydraulic head is high, water will flow readily; if the difference is nil, water will only flow if pumped. In an unconfined aquifer, the hydraulic gradient generally follows the slope of the water table.
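The gradient definition above is easy to sketch numerically. The well heads, spacing, and conductivity below are made-up illustrative values; Darcy's law (flux equals conductivity times gradient) is the standard relation linking the gradient to flow, though the text introduces permeability separately:

```python
# Hydraulic gradient between two wells, as defined in the text:
# gradient = (difference in hydraulic head) / (distance between wells).
# All numbers below are illustrative assumptions.

def hydraulic_gradient(head_a_m, head_b_m, distance_m):
    """Dimensionless gradient driving flow from well A toward well B."""
    return (head_a_m - head_b_m) / distance_m

def darcy_flux(conductivity_m_per_day, gradient):
    """Darcy's law: specific discharge q = K * i, where K is hydraulic
    conductivity (a measure of how easily the aquifer transmits water)."""
    return conductivity_m_per_day * gradient

# Two wells 400 m apart with a 2 m head difference:
i = hydraulic_gradient(head_a_m=52.0, head_b_m=50.0, distance_m=400.0)
q = darcy_flux(conductivity_m_per_day=10.0, gradient=i)  # K typical of sand
print(i)  # 0.005
print(q)  # 0.05 (m/day toward the lower-head well)
```

If the two heads were equal, the gradient and flux would both be zero, which is the "water will only flow if pumped" case described above.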
The permeability of a rock is a function of a number of factors that include the amount of pore space, the arrangement of pores, and the amount of surface tension from grains, especially tiny (micron-sized) clay minerals that have very high surface area. The larger the pore space, the more connected the grains and the less clay, the higher the permeability, and the more easily water flows. Conversely, where pore space is tight and poorly connected, and there is a lot of clay, permeability is low and water cannot flow readily.
The best aquifers are often made of rocks with both high porosity and high permeability, such as sandstone, but rocks with generally lower porosity can also be highly permeable. For example, limestone is often jointed and is readily dissolved by groundwater, leaving the rock highly permeable; rocks such as granite and basalt are often heavily fractured allowing water to flow readily. Some of the most productive aquifers are called “contained” or “confined,” and are sandwiched between low-permeability layers called “aquicludes.” Common aquicludes are shale and mudstone layers. Such contained aquifers can have a high hydraulic gradient because the aquicludes hold a significant hydraulic head; confined aquifers often produce wells called artesian wells that, owing to substantial confining pressure, produce water without pumping.
In other cases, the tops of aquifers are not confined by an impermeable layer. Such aquifers are called unconfined and will, all other things being equal, be characterized by less confining pressure. Groundwater is continuously exchanging with other reservoirs in the hydrological cycle. Aquifers are recharged with water from rain and snow percolating through the aerated zone. Conversely, groundwater flows back into rivers and lakes or into wells and springs in a process known as discharge. The time water spends underground is called the residence time, which varies from a few days to 10,000 years or more. As we will see later, the water table can move downward as a result of drought, and this is happening in arid areas today.
Communities around the world are facing a variety of different problems related to the supply of water. Many of these are not new, having been faced by ancient civilizations, for example, the problem of irrigation in desert regions. A number of problems are becoming more urgent as a result of population growth and demand on aquifers. Improving understanding of groundwater behavior and remediation and advancing technology are helping to solve some of the most pressing problems; however, many groundwater issues continue to become more dire through time, especially in developing nations. Here, we discuss some of the most pressing problems. We stress that these problems are experienced globally, although we provide regional and local examples.
Like any layer in the subsurface, aquifers and aquitards structurally support the overlying strata, and in turn, the ground level. If an aquifer is excessively pumped, water is drawn in from the surrounding aquitards. In cases where the aquitards are soft and unconsolidated, for example, composed of clays and silts, overpumping can cause these layers to fail structurally, expel much of their water, and literally collapse. When this happens, the overlying ground level can be lowered as a consequence, a process known as subsidence.
In the case of arid regions where aquifers are naturally recharged at very slow rates and where they are pumped intensively, significant subsidence can result. Some of the most drastic and best-known subsidence resulting from overuse of aquifers occurs in the San Joaquin and Sacramento Valleys of California, where the land level has subsided up to 10 meters in the last 90 years.
The San Joaquin and Sacramento rivers flow together in an area called the Sacramento-San Joaquin River Delta, an inland version of the Mississippi Delta where a series of tributary channels meander over a low-lying, flat plain. The area is an inland estuary with the Pacific Ocean on its western edge. The Delta area, as it is known, is some of the most productive farmland in the nation and provides 70% of the water supply of northern California. The water in the Delta channels has been controlled by human-made earthen levees to prevent flooding of low-lying agricultural areas as well as large developed areas including parts of the cities of Tracy, Stockton, and Sacramento. The 2,600-mile-long levee system has been built over more than 100 years and is beginning to show its age. Subsidence has occurred as a result of oxidation of organic material in soils and compaction from farming, and the structures have been weakened by erosion and seepage. Areas behind the levees have subsided by up to 25 feet, placing further strain on the structures. Levees have already failed over 30 times in the last three decades, leading to substantial flooding, massive evacuation, and six fatalities in Marysville in 1997.
Levees in the delta are maintained by the Army Corps of Engineers to withstand the strain of a 100-year flood. However, increased precipitation as a result of climate change has led some to question the Corps’ definition of the 100-year flood, and the same critics warn of catastrophic levee collapse, which could lead to massive numbers of fatalities and enormous property damage. Ultimately, what is required is a significant investment in fortifying levees to prevent this from happening.
Subsidence as a result of overpumping is actually a relatively common problem, especially in areas with rapid population growth, for example around Las Vegas, which until recently was the most rapidly growing city in the US. In Las Vegas, water use has exceeded recharge for many decades, leading to structurally controlled subsidence of up to 2 meters along pre-existing geological faults. Subsidence of some 3 meters has also occurred in the area around Houston as a result of population growth combined with extraction of large amounts of oil and gas from the subsurface.
As we will study in detail in Module 10, significant subsidence in the Mississippi Delta region around New Orleans has resulted partially from over-pumping. Even along the east coast of the US in the Carolinas, subsidence, although not as severe as out west and along the Gulf Coast, has resulted from over-pumping for agriculture and industry. In fact, one of the major demands on water in the Carolinas is for golf courses (see the lush grass in the photograph above), which account for about 60% of irrigation usage in some areas.
Without major changes in water usage and conservation, subsidence will continue and even accelerate into the foreseeable future.
Overuse of groundwater does not have to lead to major land subsidence before it causes problems. On a more local scale, over-pumping can lower the water table in a generally concentric pattern of drawdown known as a “cone of depression.” Such over-pumping often results from industry or agriculture, but individual landowners often feel the repercussions.
Alternatively, a cone of depression can result when housing developments, particularly those with many small lots, use wells for water supply. A cone of depression can drastically decrease water pressure, or worse, lower the water table below the level of the well, leaving a home or a farm without a water supply. The only solution for this is to drill the well deeper, which can be an expensive proposition for an individual landowner. Left unchecked, a cone of depression can modify the flow of groundwater as well as the distribution of pollutants.
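One simple way to picture the shape of a cone of depression is the classic Thiem steady-state solution for drawdown around a pumping well. This formula is not from the text; it is a standard hydrogeology result, and every number below is an illustrative assumption:

```python
import math

# Thiem steady-state drawdown around a single pumping well:
#   drawdown s(r) = Q / (2 * pi * T) * ln(R / r)
# where Q is the pumping rate, T the aquifer transmissivity, and R the
# "radius of influence" beyond which the water table is assumed
# undisturbed. All parameter values are made up for illustration.

def drawdown_m(Q_m3_per_day, T_m2_per_day, r_m, R_m=1000.0):
    """Lowering of the water table at distance r from the pumping well."""
    return (Q_m3_per_day / (2.0 * math.pi * T_m2_per_day)) * math.log(R_m / r_m)

# Drawdown is deepest near the well and fades outward, tracing a cone:
for r in (10.0, 100.0, 500.0):
    s = drawdown_m(Q_m3_per_day=500.0, T_m2_per_day=200.0, r_m=r)
    print(f"r = {r:>5.0f} m  drawdown = {s:.2f} m")
```

A neighboring homeowner's well screened just below the original water table can run dry if it sits inside the cone, which is the scenario described above.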
Contamination of groundwater supply can occur as a result of natural processes as well as industry and agriculture. Probably, the most lethal and extensive groundwater pollution problem globally is actually natural in origin: the contamination of groundwater with high concentrations of arsenic. Approximately 100 million people globally are exposed to high levels of arsenic in groundwater. Nowhere is the problem more devastating than over large regions of Bangladesh and the West Bengal region of India, where millions have been poisoned by arsenic. This area is intensively irrigated, which has changed the flow of groundwater over a large region. As a result, a shallow aquifer is the source of groundwater for 35-77 million inhabitants who obtain their water from shallow tube wells.
High levels of arsenic in this water likely derive from microbial activity that dissociates arsenic from organic material. Arsenic is highly poisonous and carcinogenic and long-term exposure to it can lead to high incidences of skin lesions, bladder, lung, skin and kidney cancer, respiratory disease, and liver and kidney disease. Because the threatened regions are heavily populated, this pollution has made millions of people sick and caused thousands of deaths each year. Even though the hydrology of the affected areas is not well understood, the solution to the arsenic contamination issue involves a combination of extensive monitoring, closing down high-concentration wells, distribution of filters and chemicals to remove arsenic from drinking water, and ultimately tapping deeper aquifers.
Pollution from agricultural and industrial sources is common, although not always as lethal as arsenic poisoning. Typical sources of industrial pollution include solvents, gasoline and other hydrocarbons, paint, and heavy metals. Pollution from agricultural sources includes pesticides, herbicides, and fertilizers. Many of these pollutants are carcinogenic. Both sources of pollution can lead to the growth of toxic microbes. Agricultural and industrial runoff can deliver pollutants into groundwater systems.
Human and agricultural sewage is another potential source of pollution. This pollution leads to a variety of different impacts on health, ranging from gastrointestinal illness to, in severe cases, cholera, typhoid, amoebiasis, giardiasis, and E. coli infection.
You can’t mention lead in groundwater without telling the terrible story of Flint, Michigan. Flint has had a rough economic time with General Motors pulling out of the city in the 1980s, and this is partially responsible for significant unemployment and high levels of poverty. The city is 57% African American. Minority communities have been subject to terrible inequity in terms of access to clean air and drinking water, and Flint is one of the most devastating cases of all.
The city used to derive its water supply from Lake Huron, as did the city of Detroit. This high-quality water was very expensive, and as the city was carrying great debt, back in Spring 2014 the state decided to switch the water management agency and at the same time to supply water to the city from the Flint River. Treating water from a river is far more difficult than treating water from a lake, and the processing facility wasn’t equipped to handle the poor quality of the river water. In particular, the water wasn’t treated with additives to lower its corrosiveness. Moreover, the water had very high levels of bacteria. So the end result was that the water delivered to the citizens of Flint came out of the faucets dirty, smelling bad, and tasting terrible. Even after citizens protested and showed jugs of this nasty water, officials told them the water was safe to drink. It turns out the water was so corrosive that it stripped lead from the antiquated pipe system of the city. In most cities, old pipes have been replaced, but that was not the case in Flint.
The high levels of the bacterium Legionella led to an outbreak of Legionnaires’ disease. This waterborne disease causes a severe flu-like illness with respiratory, gastrointestinal, and even neurological symptoms, and it can be fatal. In Flint, 12 people died and almost 90 became sick. There have been numerous investigations of the connections between Legionnaires’ disease and the Flint drinking water, and the most logical finding is that the high bacterial levels were a result of low chlorine in the water because it had reacted with the high lead and iron levels.
But the lead is what is likely to cause the most permanent damage. Some 100,000 people were exposed to high lead levels, including about 9,000 children who drank this dangerous water for up to 18 months. And it took the state 9 months to inform the citizens that they had discovered the lead. Children are more susceptible to the long-term impact of lead poisoning because their bodies are developing. Lead exposure can cause permanent brain damage, as well as learning and development problems, including lower IQ and speech and hearing issues, lasting for a lifetime. Tests showed that lead levels had doubled or tripled in Flint children.
The city switched back to the old water supply in October 2015, but that was not the end of the story. Lead was still in the water because of the damage to the pipes. The outrage from Flint citizens was a major reason for the state and federal response to the crisis. Citizens joined with environmental and legal groups to petition the EPA to research the environmental impacts and to sue the city and state to provide safe drinking water. And they won. The judge mandated that thousands of lead pipes be replaced and bottled water be delivered to all citizens. Now, several years later, the legal battles continue, with criminal charges pending for numerous city and state leaders. Most of the active lead-bearing pipes have been replaced, but even now there is still widespread mistrust surrounding drinking the city water.
A serious problem can result from the overuse of groundwater in coastal regions. Here, there is the potential for salt water to flow into the void left where aquifers are drained excessively. This process, termed saltwater incursion or saltwater intrusion, happens readily because salt water is denser than fresh water; hence, the pressure under a column of seawater is greater than the pressure under an equivalent column of fresh water. The result is flow into freshwater aquifers near the coast. Humans and other mammals cannot process large amounts of sodium in water; ultimately, it leads to renal (kidney) failure. This is why early explorers who became lost at sea were warned not to drink seawater. Likewise, salt water kills crops.
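The density argument above is often quantified with the Ghyben-Herzberg relation, a standard hydrogeology approximation (the relation is not named in the text; this is a hedged illustration): under hydrostatic balance, the fresh/salt interface sits below sea level at a depth of about ρ_fresh/(ρ_salt − ρ_fresh) ≈ 40 times the height of the water table above sea level.

```python
# Ghyben-Herzberg relation: a hedged sketch of the hydrostatic balance
# between a freshwater lens and denser seawater in a coastal aquifer.
RHO_FRESH = 1000.0  # kg/m^3, fresh water
RHO_SALT = 1025.0   # kg/m^3, typical seawater

def interface_depth(head_above_sea_level_m):
    """Depth (m below sea level) of the fresh/salt interface for a given
    freshwater head (m above sea level), assuming hydrostatic balance."""
    return RHO_FRESH / (RHO_SALT - RHO_FRESH) * head_above_sea_level_m

print(interface_depth(1.0))  # ~40 m of fresh water below sea level per 1 m of head
print(interface_depth(0.5))  # ~20 m
```

The roughly 40:1 ratio is why even modest over-pumping matters: lowering the water table by just 1 m can allow the saltwater interface to rise about 40 m.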
Saltwater incursion can occur in one of three ways, all as a result of over-pumping. The first is large-scale, lateral flow into the coastal aquifer, the second is vertical upward flow, and the third is flow into the aquifer from coastal streams and canals, often forced by tidal movements.
Probably the best-studied example of saltwater intrusion occurs in south Florida, where development combined with highly irregular precipitation patterns has stressed local aquifers. The Biscayne aquifer is the main source of drinking water for the Miami metropolitan area. The aquifer is unconfined, meaning that it is not overlain by aquitards, i.e., it lies at the surface. This renders the Biscayne sensitive to changes in rainfall, evaporation, and over-pumping. Salt water intrudes as a subsurface wedge, with a transitional interface between it and the overlying fresh water of the Biscayne aquifer.
The history of incursion dates back to the early 1900s, as defined by the first measured increases in salinity (chloride levels) in the Biscayne aquifer. Construction of drainage canals began in 1909, and this resulted in further inland intrusion of salt water. Intrusion continued unabated until 1946, when salinity-control structures were constructed to prevent inland, tidal movements of salt water. In the 1960s, a large drainage canal system was built as part of the massive development of south Florida.
The canals included flow-control structures to prevent excessive drainage from the canal system. However, the design of the structures lowered freshwater levels in the Biscayne aquifer, leading in turn to increased saltwater intrusion, especially during drought years. The saltwater lens has continued to shift, both inland and back toward the coast, as new parts of the aquifer have been developed and others tapped less intensively. As in other coastal regions, saltwater intrusion is an ongoing issue that will require constant monitoring as development continues and demand on aquifers increases. The potential for saltwater intrusion is one issue behind the development of desalinization technology in arid regions.
Sea level rise will increase salinization of coastal aquifers, especially in areas that are dry or subject to seasonal rainfall variability.
As we have seen, climate change will alter precipitation patterns on a global scale, leading to higher rainfall in some areas and significantly lower rainfall in others.
Superimposed on this will be changes in evaporation, runoff, and soil moisture, which will generally exacerbate droughts in areas where rainfall decreases. Generally speaking, regions that are already dry will not get wetter in the next century, and many will become significantly drier.
Regions that are already wet will often become much wetter in the future. Climate change will act in tandem with stressors on the water sector as a result of population increase. Therefore, climate change will generally render precipitation patterns more unequal than they are today. Further, as stated by the Intergovernmental Panel on Climate Change (IPCC),
The negative impacts of climate change on water resources…will outweigh the positive impacts in all regions of the world. Those places in which precipitation and runoff are projected to decline are likely to derive less overall benefit from freshwater resources. In those places that receive more annual runoff, the benefits of increased water flows are expected to be offset by the adverse effects that greater precipitation variability and changes in seasonal runoff have on water supply, water quality, and risk of flooding. (Intergovernmental Panel on Climate Change)
In more detail, recently observed trends of decreasing precipitation over latitudes 30°N to 10°S are projected to continue. Thus, arid and semi-arid regions such as the south-central US, Southern Africa, and the Mediterranean are expected to experience decreasing water supply. In some of these regions, water availability is projected to decrease by 10-30% by 2050. The IPCC estimates that two-thirds of the world population could be living under water stress or water scarcity by 2025.
The areas that are expected to suffer some of the worst consequences of changing precipitation are generally some of the least developed and poorest nations, with some of the highest rates of population growth. This combination will likely lead to significantly reduced groundwater recharge, declining surface water reservoir levels, increase in the frequency of groundwater pollution, and most critically, to rapid declines in per capita water availability.
Increased precipitation intensity, more extreme events, increased runoff, decreased infiltration, increased likelihood of contamination with sewage, fertilizers and farm wastes, less ice and snow storage, and increased droughts even in areas that receive more precipitation all will place burdens on water supplies in the future.
Drought could be one of the most serious consequences of climate change from a human and an economic perspective. On a global scale, droughts will likely lead to losses in revenue from agriculture on the scale of billions of dollars, and worse, force the migration of millions of people in arid regions of the world. Not every country can afford to engineer its way out of drought the way that Southern California has done for the last century.
As we have already seen, drought has plagued civilization for millennia and humans have learned to adapt to areas where water supplies are not plentiful or regular. However, the critical difference today is explosive population growth that is placing much more pressure on water supplies. Combined with projections that parts of the globe will become significantly drier in coming decades, drought will likely be much more of a serious issue in the future than it has in the past.
The following video provides an excellent summary of the global drought problem.
Here, we provide two modern-day case studies of the impacts of drought on water supplies in Australia and China and how these countries are responding to them.
Until 2011, much of Australia was in a decade-long drought, providing a grim picture of what the future may hold for the driest continent. The Murray-Darling Basin is one of the most productive agricultural areas in the country, producing a third of Australia’s food. The basin covers over a million square kilometers, about one-seventh of the whole continent, and includes some 20 rivers, most notably its namesake rivers, the Murray and the Darling. The region is generally dry (average precipitation is about 500 mm per year). The total flow carried by the rivers of the Murray-Darling Basin is significant compared to other Australian rivers, but it is dwarfed by the flow of river systems elsewhere with equal drainage areas. Thus, the region is prone to drought, and there have been numerous times in the past when the Murray, and especially the Darling, have completely dried up. The Murray-Darling Basin produces wool, cotton, wheat, sheep, cattle, dairy produce, rice, oil-seed, wine, fruit, and vegetables, and three-quarters of Australia's irrigated crops and pastures are grown in the basin. Thus, the rivers are vital to Australian livelihoods.
Between 2006 and 2009, precipitation in the mountains in the eastern part of the drainage area, which supplies nearly 40% of the water to the rivers, was lower than at any time in the historical record. Other parts of the basin accumulated a total rainfall deficit of about 1.5 meters below normal over the period 1996-2008. Warmer temperatures, which led to higher evaporation rates, exacerbated the impact of the drought: the roughly 1°C of warming in the basin is equivalent to about a 10% increase in evaporation.
In recent decades, studies have repeatedly confirmed that the environmental health of the Murray-Darling Basin is in decline. On top of the drought, over-extraction of water under the past entitlement system has combined with high salinity levels, overall poor water quality, the growth of blue-green algae, declining wildlife, and land degradation to produce a dismal outlook for the basin.
To preserve the water resources of the Murray-Darling, the Australian government has developed a basin plan with a critical provision: an annual water usage (termed a level of take) from the Murray-Darling rivers of 10,873 gigaliters per year (GL/y), a level judged environmentally and ecologically sustainable for the long term. This take is a cut of about 2,750 gigaliters from current levels and will be phased in over a seven-year period. The government is also setting aside some $6 billion to invest in infrastructure, including upgrades to irrigation systems. The plan, which became law in 2012, divides the basin into different surface water (i.e., rivers and lakes) and groundwater areas and sets goals for water usage by agriculture and communities in each of these areas.
Moreover, these districts will have the right to trade water with one another. Overall, the plan is one of the world’s most forward-thinking water use policies. However, it is turning out to be highly controversial. Environmentalists have charged that the plan is too little, too late, insufficient to ensure continued flow through the basin, and not enough to alter the high salt loads in river waters. Politically, there are also significant issues, with some areas targeted for much more drastic reductions in water use than others. However, the policy has the most serious implications for individuals, especially farmers in areas where the most stringent reductions are slated. Enforcement of the water restrictions will almost certainly cause many farmers to go out of business. You can Google “Murray Darling water” for the latest on how this policy plays out.
China faces some of the most serious water issues on the planet. The problems stem from explosive population growth and an inadequate water supply, which have pitted demand for clean drinking water against demand from industry and agriculture. In China, drought and pollution thus combine to create devastating water problems. To put the problem in context, the country has 20% of the world’s population but less than 8% of its water; in other words, the Chinese per-capita water supply is a quarter of the world average. Half of China’s large cities, including Beijing, face a water shortage.
Superimposed on the overall shortage is a significant disparity in supply with the northern tier of China being significantly more arid and the southern tier being significantly more moist. Just under 50 percent of the population of China lives in the northern tier, and close to 60 percent of cultivated land is also in this area, yet only 14 percent of the country's total water resources are found in the region. Production of grain has gradually shifted from the south of China to the north, exacerbating this problem. As a result, the water table is dropping by 1.5 meters per year in parts of the northern portion of the country.
In all, explosive population growth and rapid industrialization have fueled nationwide demand for water over the last sixty years, with the construction of more than 86,000 reservoirs, the drilling of more than four million wells, and the development of 580,000 square kilometers of irrigated land that generates 70% of the country's total grain production. Lax Chinese environmental controls have led to some of the worst water quality in the world, with widespread pollution. Factories are very often situated on river banks for water supply, yet a shortage of water treatment plants means that about 80% of wastewater is discharged untreated back into the same rivers it came from, and about 75% of rivers are polluted. Worse, approximately 90% of groundwater in urban areas is polluted. Farmers often have no choice but to use contaminated water for their crops, and an estimated 700 million people drink contaminated water every day. In some parts of the country, high incidences of digestive cancers (stomach, esophagus, intestine) have been tied to water pollution.
Mismanagement of water resources is commonplace. Diversion of rivers for industrial purposes and irrigation has caused water shortages in areas that once had a steady supply. The Yellow River, once a sizeable waterway and source of water for agriculture, has been diverted for irrigation and now dries up for increasing portions of the year (in 2010, for more than 200 days). As in many parts of the world, industrial demand for water has trumped demand for agriculture. Even when water remains for agriculture, a large amount is wasted through evaporation; the total loss from canals and irrigation systems is 60-80% of the supply.
The following video discusses the water pollution problem in China. Watch the first 10 minutes or so.
Water shortage presents a major obstacle to growth in China; moreover, pollution is a potential environmental catastrophe. To increase the supply of water to areas in the north of the country, China has developed one of the largest public works projects in the world, the South-North Water Diversion Project. This program is designed to divert water from the Yangtze River in central China to rivers in the northern part of the country. Three major routes are being considered, each consisting of tunnels, canals, and dams. However, the project is extremely expensive and its success is not assured, so plans remain in limbo. In the meantime, the Chinese government pledged $600 million in 2009 to improve water management and combat contamination problems.
You are certain to hear a lot more in the future about continued attempts to provide safe water for the Chinese population and agriculture, especially in the light of climate change.
In areas forced to deal with more regular droughts and less regular rainfall, a number of management strategies will become increasingly vital over the coming decades. In rural areas, especially in underdeveloped countries, potential strategies include techniques already being piloted in many places: rainwater storage, household treatment using filters, planting of drought-tolerant crops, and drilling of shallow boreholes or tube wells. As we have already seen, poor and disadvantaged populations in developing nations will bear the brunt of the adverse effects of climate change, yet these nations also have less capacity to adapt because of limited resources. Thus, developed nations will be under great pressure to help their developing counterparts. Adaptation will be most difficult for sub-Saharan Africa.
In more developed countries, management strategies include conservation, groundwater recharge, storm-water control and capture, preparation for extreme weather events, diversification of the water supply, and resilience to changes in water quality.
The following video is a great summary of our current water situation and what you can do to help out:
It is likely that many or all of these strategies will be required for populations to adapt to declining water resources. One technology, desalinization, has great potential to provide large quantities of water in arid regions, especially those along coastlines.
My one experience with boogie boarding in Hawaii was a disaster. I got caught in no man's land with 15-foot waves breaking on me and my board around my ankles. It took me five minutes to escape to shore, by which time I had swallowed a lot of seawater. I had to take a night flight back to the mainland, and all I can say is that it was not fun! The largest body of water on Earth is the ocean. Desalinization, the removal of salt from seawater, offers great promise for supplying citizens of arid regions in the future. Here we explore the technology and potential of this technique.
Water desalinization (often termed desalination) has enormous potential for supplying clean water for drinking as well as for irrigation, especially in regions that are arid or have irregular precipitation and are near the ocean. Desalinization is carried out in a number of ways. The most productive method in terms of the amount of water produced is multi-stage flash distillation (MSFD), which produces over 80% of the global volume of desalinized water today. MSFD is carried out in a plant divided into different units, each with a heat exchanger and a collector for the condensate.
The units or reservoirs are maintained at different temperatures and, critically, at different pressures. The pressure of each reservoir is set to the boiling (saturation) pressure of water at the reservoir's temperature, so lower-temperature stages operate at lower pressures. A brine heater is positioned beyond the highest-temperature stage. Seawater entering the plant is pumped from the coldest reservoir toward the hottest and is gradually warmed by water traveling through the other side of the heat exchangers. After being heated further in the brine heater, the water is cycled back progressively through the lower-temperature stages, returning on the other side of the heat exchangers that warmed it on its way in. In each stage, the water arrives above the boiling point for that stage's pressure, so a fraction of it "flashes" to steam; the steam condenses on the heat-exchanger tubes (warming the incoming seawater) and is collected as desalinized water, while the remaining brine settles in the reservoir and passes on. The key advantage of the technique is that it is relatively energy efficient, because the water supplies much of the heat to itself. However, the product water can still contain impurities if the feed is not treated significantly before entering the plant. In addition, the technique leaves a large amount of brine that must be disposed of (this waste is usually disposed of in the ocean).
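The link between stage temperature and stage pressure follows water's saturation curve. As a hedged illustration (the Antoine-equation constants below are standard published values for water, not figures from the text), one can compute the pressure each stage must hold for water at its temperature to flash:

```python
# Saturation (boiling) pressure of water via the Antoine equation.
# Constants are the widely used set for water in mmHg and deg C,
# valid roughly 1-100 deg C; this is an illustrative sketch.
A, B, C = 8.07131, 1730.63, 233.426

def saturation_pressure_kpa(temp_c):
    """Pressure (kPa) at which water boils at the given temperature (deg C)."""
    p_mmhg = 10 ** (A - B / (temp_c + C))
    return p_mmhg * 0.133322  # convert mmHg to kPa

for t in (40, 60, 80, 100):
    print(f"{t:3d} deg C -> {saturation_pressure_kpa(t):6.1f} kPa")
# At 100 deg C the result is ~101 kPa (1 atm); cooler stages need
# progressively lower pressures for the brine to keep flashing.
```

This is why the coolest MSFD stages run well below atmospheric pressure: it is the only way water at, say, 40 °C can still boil.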
There are other desalinization processes that also use distillation to remove salt and other chemicals. However, the main alternatives to MSFD are those that use reverse osmosis (RO). RO is the most widely adopted desalinization process, even though RO plants currently produce only about 15% of desalinized water by volume.
Like MSFD, RO requires significant pretreatment to remove solids and bacteria and to adjust the pH and chemistry so that products such as calcium carbonate and metal colloids do not form. This is critical for seawater, which has high turbidity and contains organic materials that can clog RO membranes. RO takes place when water is forced under pressure through a membrane. As its name implies, the process is the reverse of osmosis, in which water separated from a solution by a semi-permeable membrane flows from the side with low solute concentration to the side with high concentration. When pressure in excess of the osmotic pressure is applied to the concentrated side, the flow reverses: water is driven from the high-concentration side to the low-concentration side, solutes are retained at the membrane, and purified water passes through. In RO plants, water passes through a number of membranes before it is pure enough for drinking.
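The pressure an RO pump must overcome is the osmotic pressure of the feed. A rough, hedged estimate using the van 't Hoff relation (treating seawater as about 0.6 M NaCl that fully dissociates; these numbers are illustrative, not from the text):

```python
# van 't Hoff estimate of seawater osmotic pressure: pi = i * c * R * T.
# Real seawater is ~27-28 bar; the ideal-solution estimate is close.
R = 8.314     # J/(mol*K), gas constant
T = 298.15    # K, room temperature
c = 600.0     # mol/m^3, roughly 0.6 M NaCl
i = 2         # Na+ and Cl- ions per formula unit

pi_pa = i * c * R * T          # osmotic pressure in pascals
print(f"~{pi_pa / 1e5:.0f} bar")  # ~30 bar
```

Seawater RO plants must apply pressures well above this roughly 30 bar to drive a useful flow of fresh water through the membranes, which is why energy is the dominant operating cost.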
The key factor besides purity in the viability of desalinization to produce large quantities of drinking water and water for irrigation is cost, usually referred to as the cost per volume of drinking water produced. The most significant cost is the construction of the plant, but once developed, the key expense involved in desalinization is that of energy. The increasing price of energy could limit the viability of desalinization in many places.
Desalinization is critical to growth and sustainability in countries in the Middle East, and much of the technology was developed there. Today, Saudi Arabia is the largest producer of desalinized water, followed by the United States; in the US, desalinization plants are concentrated in California and Florida. Countries such as Australia, with extensive arid regions and highly irregular precipitation, are gearing up to increase the amount of water produced by desalinization; for example, Australian investment involved a tripling of the number of plants between 2004 and 2013.
Desalinization technologies have applications beyond seawater. For instance, desalinization is used to treat groundwater in inland areas where it is too salty for drinking or irrigation, as in the El Paso region of Texas. Desalinization can also be used to treat effluent from sewage treatment plants.
In summary, the future of desalinization is very promising, and this technology will likely play an increasing role in countries that can afford to develop it.
Even in regions where desalinization has the potential to add water and strict management practices are underway, water is such a vital commodity that water rights of communities, cities, and even states are often contested in court. Such legal battles sometimes stem from old agreements about the distribution of rivers and groundwater between municipalities that were drawn up before substantial growth occurred. With population growth requiring water for drinking, domestic use, agriculture, and industry, the value of water has increased substantially, and old agreements are often extremely prohibitive to growth. Some of the most bitter water disputes occur in the western US, where, as we have seen, southern California relies heavily on water derived via aqueducts from the Colorado River to the east and the Owens Valley in the Sierra Nevada Mountains to the north.
The City of Los Angeles has had brutal showdowns with farmers and environmentalists in the Owens Valley, from which it derives about half of its water. The city built the first of two aqueducts from the valley between 1908 and 1913 and the second in 1970. These aqueducts substantially lowered water levels in Mono Lake, Owens Lake, and the Owens River, and took a terrible toll on farming in the Owens Valley. The impact was so severe that farmers used dynamite to breach the aqueduct and temporarily return the flow to the Owens River. After the second aqueduct was built, a series of lawsuits was brought by municipalities in the Owens Valley and, ultimately, by the Sierra Club. The net result has been rulings in favor of the Owens Valley and some recovery of water levels in bodies such as Mono Lake, but southern California continues to withdraw water faster than it is replenished, so the conflict is by no means over.
To the east of Los Angeles, water rights for the Colorado River were defined by the Colorado River Compact of 1922, which divided states bordering the river into upper basin states (in the Rocky Mountains) and lower basin states (in the plains to the west). The compact appropriated the annual amount of water each group of states could withdraw from the river with the upper basin states receiving the same amount as the lower basin states.
Today forty million people from Wyoming to Mexico receive water from the Colorado River, so the river is vital to communities small and large and for residential and agricultural use. Since the compact was developed, the lower basin states (Arizona, California, Nevada) have developed especially rapidly and now use a lot more water than they did in 1922. Cities such as Phoenix and Las Vegas have experienced some of the most rapid growth in the country.
The compact was modified when the Hoover Dam was constructed, at which time the lower basin states were allocated annual withdrawal amounts. These amounts led to fierce litigation between Arizona and California, which changed the appropriations in Arizona’s favor. For a long time, only California completely utilized its quota each year, and its surplus was guaranteed by the Secretary of the Interior until 2016. By that time, surging development in Arizona and southern Nevada required full use of their quotas from the Colorado, so the surplus was no longer available to California.
Two major reservoirs exist in the lower Colorado River basin, Lake Mead and Lake Powell, impounded by the Hoover Dam and the Glen Canyon Dam, respectively. These reservoirs were designed for water management, but both have been drying up recently. The situation is dire in both, as the images below show. Let’s start with Lake Powell. This reservoir provides water, and electricity generated by turbines in the Glen Canyon Dam, to millions of people. The level in Lake Powell is lower than it has ever been: as of June 2023, it stood at 3,580 feet, against a normal level of 3,700 feet. If the level drops below 3,490 feet (a level known as “dead pool”), water cannot flow downstream from the reservoir to the lower basin states. In addition, the dam would not be able to generate electricity, potentially cutting off power to millions.
Lake Mead’s issues may be even more pressing, as the lake provides 90 percent of nearby Las Vegas’s water. The largest reservoir in the country stood at 1,049 feet in May 2022, some 170 feet below full capacity. The level was so low that sunken boats resurfaced and an intake valve (for pumping to Las Vegas and other communities) was exposed; Las Vegas had to draw from lower intake valves installed to retrieve water at lower lake levels. Fortunately, there was a massive amount of snow in the mountains in the winter of 2022-2023, and the level has risen somewhat. Regardless, Las Vegas is planning for a future in which low water supply is the new normal and “dead pool” events, when no water flows out of Lake Mead, are frequent. Fortunately, a third intake valve and pumping station for Las Vegas’s water has been installed below the dead pool level, so the city will still receive water, but it is already imposing severe water restrictions, including banning grass in yards and strictly limiting the watering of grass on golf courses. The city also recycles much of its water. These and other measures have succeeded in reducing demand: over the last twenty years, the population has grown by 49% while water use has shrunk by 26%. Even so, the future looks bleak, as the decades-long drought in the area is forecast to continue.
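The conservation figures above imply a surprisingly large per-person reduction. A quick back-of-envelope check (both percentages are taken directly from the text):

```python
# Las Vegas water arithmetic: population +49%, total use -26% over 20 years.
pop_factor = 1.49   # population grew by 49%
use_factor = 0.74   # total water use shrank by 26%

per_capita_factor = use_factor / pop_factor
print(f"Per-capita use is now ~{per_capita_factor:.0%} of its former level, "
      f"a ~{1 - per_capita_factor:.0%} cut per person")
```

In other words, each resident uses roughly half the water they did twenty years ago, which is what makes continued growth possible even as total supply shrinks.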
So, the situation is dire for both Lake Mead and Lake Powell, and in 2023 the US government brokered a temporary deal whereby the lower basin states (California, Arizona, and Nevada) must lower their water extraction from the Colorado by 13 percent. However, a longer-term deal must be reached by 2027, and this will likely involve some tough negotiation. Essentially, the original 1922 compact was developed at a very wet time in the West, and the upper basin states (Colorado, Wyoming, Utah, and New Mexico) can’t afford to give 50% of the water to the lower basin states when they need it to fuel growth in cities such as Denver and Albuquerque, as well as to provide the water that farmers and ranchers desperately need.
In a sign of times to come, Phoenix has just imposed restrictions on development in its fastest-growing suburbs, where the supply comes from groundwater. The new rules say that no new development can take place without an alternative source of surface or recycled water. Such controls are likely in all southwestern cities in the future as climate change leads to even lower water supplies.
In this module, you should have learned the following concepts:
You should have read the contents of this module carefully, completed and submitted any labs, the Yellowdig Entry and Reply and taken the Module Quiz. If you have not done so already, please do so before moving on to the next module. Incomplete assignments will negatively impact your final grade.
Locusts, a group of swarming grasshoppers, are legendary pests. Plagues of locusts that demolished crops are described in the Bible and the Quran; more recently, such plagues have occurred episodically in North Africa, the Middle East, and elsewhere. And research suggests that climate change may make such plagues even more frequent in the future. Locusts include species of grasshoppers that have solitary and swarming phases in their life cycle. Research indicates that swarming occurs when the density of locusts is elevated. Swarms can include billions or even trillions of individuals, and each locust can fly as far as 500 km and eat the equivalent of its own weight in a day. A single swarm can eat enough food for 2,500 people in one day! Locusts, especially the desert locust of North Africa, have been known to completely demolish crops, and the only strategy is to hold populations in check with pesticides. There is evidence that swarms develop best when unusually wet weather is followed by prolonged warm conditions. The wet phase, which causes lush vegetation to grow, allows for prolific reproduction, and the warm phase enhances the gregarious, swarming behavior. Since climate change will cause more frequent intervals of elevated precipitation and heat waves, plagues may become more frequent and more populous in places such as North Africa, Southern Europe, the Middle East, China, and Australia. Such massive and unrelenting plagues will be difficult to control with pesticides.
Insects are just one of many challenges that will make it harder and harder for humankind to receive ample nutrition. As we will show in this module, food is most definitely one of the Grand Challenges of the 21st Century, and feeding the increasing global population will require a very different approach to the production and distribution of crops and other sources of nutrition. Climate change will make matters increasingly difficult, in particular in regions where food shortages already exist.
On completing this module, students are expected to be able to:
After completing this module, students should be able to answer the following questions:
Below is an overview of your assignments for this module. The list is intended to prepare you for the module and help you to plan your time.
Action | Assignment | Location
---|---|---
To Do | |
Earth is becoming increasingly crowded. Global population stands at just over 7 billion and is rising by 78 million people per year. If that number does not sound like a lot to you, think about it this way: we are adding close to the population of Germany every year to our crowded planet! Currently, about 134 million children are born every year (roughly 367 thousand per day), and there are approximately 56 million deaths per year. At this rate, the population is predicted to reach 8 billion by 2027 and about 9 billion by 2046. It's mind-boggling to think that the global population when many of you were born stood at just over 5 billion! The graph below shows the massive increase in global population as well as these sobering projections, and it demonstrates how the global increase has been driven by surging populations in Africa and, especially, Asia.
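The population figures in the paragraph above are internally consistent, as a quick arithmetic check shows:

```python
# Checking the numbers in the text: births minus deaths should equal the
# stated net annual increase, and daily births follow from dividing by 365.
births_per_year = 134e6
deaths_per_year = 56e6

net_growth = births_per_year - deaths_per_year   # 78 million per year
births_per_day = births_per_year / 365           # ~367,000 per day

print(net_growth / 1e6)       # 78.0, matching the stated annual increase
print(round(births_per_day))  # ~367,000 births per day
```

Net growth of 78 million per year is indeed close to the population of Germany, as the text notes.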
The video below portrays the recent growth in population as well as the outlook.
In 1798, Thomas Malthus wrote An Essay on the Principle of Population. In this work, he predicted that populations of nations would be restricted by the availability of food because nations would not be able to control birth rate. The essay has been intensely debated by evolutionary biologists, economists, and many others for the last two centuries. Repeated famines around the globe generally support Malthus' hypothesis.
Regardless, we find ourselves in a position today where the world is currently not able to feed all of its inhabitants. Currently, more than one billion people are estimated to lack sufficient food, and more than twice that number do not receive adequate nutrition. This situation will likely become a lot more dire in the future. In fact, predictions for population increase diverge significantly in the later parts of the century because some models assume mass mortality due to widespread famine. To make matters worse, climate change is predicted to cause major problems to crop yields, especially in parts of the world where the population is growing the fastest. These shortages will likely lead to mass migration of huge numbers of people, possibly entire nations.
Agriculture is an essential part of every society. Forty percent of the Earth’s surface is managed for cropland and pasture. That is more than for any other activity, including forestry, which comprises 30% of Earth's area. In underdeveloped countries, approximately 70% of the population lives in rural areas where agriculture is the major activity. There are many diverse sources of food, and populations around the world have very different diets and demands. In addition to crops and livestock, fish is a staple of the diet in many countries. Projections for climate change differ significantly for these various food sources, thus we will discuss them separately. For crop farming, in particular, the impact of climate change is very different for agro-businesses run by multinational corporations in the developed nations than for family-operated farms in the developing world.
Climate change will have very different impacts on different continents, and socio-economic factors will govern the abilities of nations to respond. Even though the south-central and southwestern US are likely to face extreme drought in the coming decades, inhabitants and businesses should by and large be able to adapt. It will be a starkly different situation in southern Africa, West Africa, India and Southeast Asia, Mexico, and northeast Brazil, where climate change will be coupled with a general lack of coping mechanisms. In all of these regions, the impact of climate change, and the inability of societies to fully cope with it, will potentially result in security issues including soaring food prices and military conflicts.
Food shortages will not be a new problem in many places. Mass migrations driven by food supply have been an important part of human evolution and a tragic part of the history of many nations, especially those in Africa. As we saw in Module 2, famine caused by drought is thought to have caused the collapse of the Mayan civilization in Central America around 850 AD. More recently, famine in China in the late 19th century caused the death of some 60 million people. Famine is possible even in the developed world. The Irish potato famine of 1845-1852, caused by a potato disease called potato blight, led to approximately 1 million deaths and, together with emigration, a 20-25% drop in the population of that country. This problem does not appear to be improving with time. In the 20th century, 70 million people are thought to have died during famines: 30 million in China alone between 1958 and 1961, and 7-10 million in India in 1943.
Nowhere is the history of famine more devastating than in Africa. Famines on the African continent, like elsewhere, have often resulted from a combination of drought and political conflicts, oppressive military regimes, and war. Most recently, the conflict in the Darfur region of Sudan erupted over water in 2003. One side of the conflict, consisting of migratory herders, has displaced large numbers of sedentary farmers to parts of the country without sufficient food and water. Mass mortality has largely been caused by disease. Estimates of the number of dead are uncertain, but range between 180,000 and 460,000. This number is much less than the number of deaths in the Biafran conflict in Nigeria between 1967 and 1970, when nearly a million people died from conflict and starvation. Today, more than a third of Africans suffer from hunger.
The following video portrays the tragic history of famines.
A number of different physical variables impact agriculture. These include temperature, precipitation, humidity, wind speed, and radiation. The absolute levels of these variables and their variability on a daily, monthly, and annual basis, affect crop yields as well as livestock health.
Crops are particularly sensitive to absolute temperature variation even over short time scales, in some cases a few hours (for extreme cold) and days (for warmth). Likewise, extreme events such as floods, and inter-annual variations in rainfall connected with cycles such as ENSO can also impact crops significantly. For example, the major drought in Australia from 1998-2010 led to significantly lower crop yields. Major cold snaps in Florida in 1983 and 1985 killed a third of all citrus trees, with an accompanying loss of $2 billion. At the other end of the spectrum, the North Atlantic Oscillation has caused sunnier summers in Britain, leading to increased wheat yields.
As it turns out, anthropogenic impacts can greatly magnify the effects of climate change on crops, livestock, and fisheries. For example, soil erosion, overgrazing, air pollution, salinization of groundwater, and pests and overuse of pesticides tend to exacerbate the impacts of the changing climate such as droughts and heatwaves. Here, we describe the forecasts and impacts of changes in climate variables, followed by anthropogenic changes.
Model forecasts under SRES A1B and A2 are for 2-4°C warming by 2100, with the most significant increase in high-latitude regions. In addition, the forecasts indicate a much higher likelihood of heat waves in the future. As it turns out, an increase in average temperature can have a positive impact on agriculture, lengthening the growing season in regions with cool spring and fall seasons. Regional and global simulations allow prediction of the effect of temperature increase on crop yield. The results (see figure below) show that modest temperature increases produce increased yields for some crops. Warming will also decrease the occurrence of severe winter cold stress on crops, causing a poleward shift in the regions feasible for agricultural activities. This is especially important for high-yield tropical crops such as rice. Warming will have a greater impact in the Northern Hemisphere, where there is more cultivated area at high latitudes.
However, warming ultimately reaches a limit where yield curves start to decrease for all crops. Globally, this threshold is reached at temperature increases over about 3°C. Crop yields also decline precipitously at temperatures above 30°C; although plants develop faster in warm temperatures, photosynthesis has a temperature optimum in the range of 20°C to 25°C, and, above this range, plants have less time to accumulate carbohydrates, fats, and proteins. Because individual plants cannot move, they have developed mechanisms that allow them to tolerate higher temperatures and adapt to changing soil conditions. These mechanisms include the production of heat-shock proteins, as well as the ability to continue photosynthesis during periods of heat. Such adaptations will partially determine where a crop can survive the impact of climate change.
Warming will have negative impacts on crop yield in regions where summer heat limits production, and it will lead to more frequent extreme high heat stress on crops. Heat stress varies by plant but includes lack of emergence of new plant material or damage to it, water deficit as a result of high evaporation, damage to reproductive development, and death. In addition, climate change will lead to increased soil evaporation rates, which stress crops and also increase the intensity of droughts.
Heat waves are defined differently in different places, but usually as a specific number of days above a certain temperature. They are best known for their human toll. For example, the Great Chicago heat wave of 1995 led to over 700 deaths during five extremely warm and humid days with a heat index reaching 125 degrees. A prolonged heat wave in Europe in 2003 killed more than 40,000 people and led to a 20 to 36% decrease in the yields of grains and fruits. Prolonged heat waves can also cause significant damage to crops and livestock, with major economic losses. In Russia, an extended heat wave in 2010 caused 50,000 deaths and the loss of 25% of the grain yield, at a cost of $15 billion. In the central US, more than 6,000 cattle died in the July 2011 heat wave, and the 2012 heat wave in the same region resulted in the worst corn crop in two decades.
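The heat index cited for Chicago is an "apparent temperature" that combines heat and humidity. A common way to approximate it is the Rothfusz regression used by the US National Weather Service; the sketch below implements that regression (the example temperature and humidity are illustrative values, and the full NWS procedure adds low- and high-humidity corrections omitted here):

```python
def heat_index_f(temp_f: float, rel_humidity: float) -> float:
    """Approximate heat index (°F) via the NWS Rothfusz regression.

    Valid roughly for temp_f >= 80 °F; rel_humidity is in percent.
    The NWS low- and high-humidity adjustments are omitted.
    """
    t, rh = temp_f, rel_humidity
    return (-42.379
            + 2.04901523 * t
            + 10.14333127 * rh
            - 0.22475541 * t * rh
            - 6.83783e-3 * t * t
            - 5.481717e-2 * rh * rh
            + 1.22874e-3 * t * t * rh
            + 8.5282e-4 * t * rh * rh
            - 1.99e-6 * t * t * rh * rh)

# A 100 °F day at 55% relative humidity "feels like" about 124 °F --
# the kind of conditions reported during the 1995 Chicago heat wave.
print(round(heat_index_f(100, 55)))  # 124
```

Note how quickly the index climbs with humidity: the same 100 °F day at low humidity feels far less dangerous, which is why humid heat waves like Chicago's are so deadly.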
Models indicate that precipitation will increase in high latitudes, including Northern Europe and Canada, but decrease in most subtropical land regions, including the southern tier of US states from Texas to California. Droughts will become longer and more intense in these areas. A decrease in precipitation can reduce soil moisture over the short term and increase soil erosion rates over the long term. Likewise, as we have seen, the intensity of extreme events such as cyclones and hurricanes is likely to increase, potentially leading to more significant crop damage and soil erosion. Waterlogged soils can severely damage root systems and limit the uptake of nutrients, and flooding can cause the permanent loss of many crops. One of the most significant periods of flooding in the US took place in the Midwest in 1993, when the Mississippi and Missouri rivers flooded their banks and submerged huge areas of farmland, a total of nine million acres. The estimated crop losses during this event were $7 billion. In the 2008 Midwest floods, Iowa alone lost $4 billion in damaged crops.
Precipitation may be crucial for determining the impact of climate on crops. Decreases in precipitation, and in precipitation-to-evaporation ratios, in marginal areas that are currently entirely rain-fed may change ecosystem function to the point where irrigation is required. Estimates suggest that increased evaporation will raise irrigation requirements by 5-8% globally, and by as much as 15% in Southeast Asia.
A key variable controlled by a combination of heat stress and rainfall is the 120-day growing period, the minimum duration required for crops such as corn to survive. Climate change has the potential to shrink the 120-day growing period, and this will greatly impact the sustainability of crop production.
Drought can be devastating to agriculture. During the decade-long “Federation” drought in Australia, at its worst from 1901 to 1903, an estimated 52 million sheep perished. In the long Dust Bowl of the 1930s, over 75% of topsoil was lost in areas of Kansas, Oklahoma, and Texas, and crops were ruined. In fact, in many areas, agriculture never recovered from the drought, and the economic losses and human suffering are still legendary.
The following videos describe recent droughts in Texas and Australia:
All SRES emission scenarios call for CO2 levels to increase significantly over the course of the 21st century. Even with drastic action, levels are predicted to reach 550 ppm by mid-century before decreasing. Worst-case scenarios have levels continuing to rise beyond 650 ppm at the end of the century. Compared to other climate variables, increasing CO2 generally has a positive impact on crops, leading to increased yields (see figure below). CO2 is key to photosynthesis; the gas is what is known as a limiting nutrient for plant growth, and without a certain amount of CO2, plants will fail to grow. CO2 acts as a fertilizer for crops like rice, soybeans, and wheat, enhancing growth rates. The impact of increased CO2 is only known via experiments, which show increases in production of 5-20% at CO2 levels of 550 ppm. The key uncertainty is how well such experiments predict the real world; most scientists expect real-world yield gains to be somewhat smaller than those measured in the lab.
Pollution from industry will increase tropospheric ozone levels. Ozone levels in the lower atmosphere are determined by both emissions and temperature, thus ground-level concentrations are almost certain to increase. Higher levels of ground-level ozone limit the growth of crops.
As with temperature and precipitation, the negative impact of increasing ozone on crops will offset the beneficial impact of elevated CO2 levels. At the same time, the stratospheric ozone layer is thinning in some regions, leading to increases in UV-B exposure. The future impact of ozone and UV-B on crops is not completely understood, as increasing CO2 may either amplify or diminish the effect. Increasing ozone and UV-B exposure can reduce rates of photosynthesis and worsen a number of other measures of crop stress, including sensitivity to drought.
Food production will be affected by a number of other factors, including rising sea level (see Module 10) that will swamp low-lying coastal areas that include some of the most productive areas of the world today. The potential area of crop production is also being reduced by desertification and salinization (increase in harmful salt levels in topsoil as a result of excess evaporation) as well as soil erosion over vast areas of the world. Soil erosion can result directly from climate change, for example, from increased precipitation in major storms. However, it is also a product of over-cultivation of crops and other poor agricultural practices.
The impact of climate change on North American agriculture (summarized in the figures below) varies significantly by region. Projections suggest yield increases of 5-20 percent over the first decades of the century as a result of warming and higher CO2, with generally positive effects for the nation as a whole for much of the century. However, some regions of the continent will be much more vulnerable than others. In particular, the Great Plains will likely face declining yields as a result of drought. Crops that are limited by growing season, for example, fruit in the northeastern US, will benefit from improved growing conditions, whereas crops near their climate thresholds, for example, grapes in California (limited by low rainfall), will likely see lower yields and poorer quality. Drought in California will likely impact the yields of numerous crops.
To feed the burgeoning global population, the food supply is going to have to increase, but it is not going to be easy. The Food and Agriculture Organization (FAO) predicts a 55% increase in crop production by 2030 and an 80% increase by 2050 (both relative to 2000 levels). To allow for this increase, a 19% increase in rain-fed land area and a 30% increase in irrigated land area will be required. This areal increase will take place in developing countries, including Latin America and Sub-Saharan Africa, where crop yields per acre are also expected to rise. Even so, the rate of growth in global crop production is predicted to decline, from 2.2%/year in 1970-2000 to 1.6%/year in 2000-2015, 1.3%/year in 2015-2030, and 0.8%/year in 2030-2050. Thus, producing sufficient food to feed our growing population will become increasingly challenging. Here, we discuss the different types of agricultural business and how they will change in the future.
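The FAO percentages and annual growth rates quoted above are mutually consistent, as a quick compounding check shows (the rates and intervals are the ones given in the preceding paragraph):

```python
# Compound the FAO annual growth rates quoted above and compare the
# result with the projected totals (both relative to 2000 production).
production = 1.0
for rate, years in [(0.016, 15),   # 2000-2015: 1.6%/year
                    (0.013, 15)]:  # 2015-2030: 1.3%/year
    production *= (1 + rate) ** years
print(f"By 2030: +{(production - 1) * 100:.0f}%")  # ~ +54%, vs FAO's 55%

production *= 1.008 ** 20                          # 2030-2050: 0.8%/year
print(f"By 2050: +{(production - 1) * 100:.0f}%")  # ~ +81%, vs FAO's 80%
```

In other words, even these "declining" growth rates, compounded over five decades, deliver the projected 80% expansion; the concern is whether they can be sustained at all under climate stress.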
Subsistence agriculture refers to rural production in developing countries where farms are run by families, and farming provides the main source of income. Seventy-five percent of the world's 1.2 billion poor people live in rural areas. These people, with poor technology and more limited access to markets, have a much more difficult time farming than large agribusinesses. Subsistence and smallholder agriculture (SSA) is more threatened by climate change than any other type of agriculture largely because of the lack of technology and limited resources to fall back on during tough times. However, SSA is also remarkably resilient. The availability of extended family labor and indigenous knowledge can overcome significant hardship.
SSA will likely decrease in importance with the gradual migration of the population of under-developed countries from rural to urban areas. Urban population recently overtook rural population, and in rural areas, there has been a movement away from SSA to other forms of subsistence. Thus, in many areas of the world, SSA is becoming increasingly rare as a way of life.
By far the most important sources of food for humans are the grains: wheat, corn, and rice. All three crops will be hit hard by heat waves in key grain-producing regions such as the Canadian, US, and Russian heartlands. However, wheat is likely the most susceptible crop. Wheat is a cool-season crop, and recent research suggests that warming over the last 50 years has already resulted in a 5 percent decline in production. Climate change is projected to make summers hotter than the hottest summer on record occur every other year, which would cause a 25% decrease in wheat production.
Drought will impact the production of all three grains, drive up their prices, and potentially lead to famine and political unrest.
Pasture includes both grassland and an ecosystem known as rangeland, which includes deserts, scrub, chaparral, and savannah. Grassland often occurs in semi-arid locations, such as the steppes of Central Asia and the prairies of North America, but can also be found in wetter locations, for example, northwestern Europe. Rangeland is found on every continent, especially where temperature and rainfall limit other vegetation types. Grassland is very sensitive to climate change because many grass species are fast growing and have short growing seasons. Increases in CO2 have been shown experimentally to decrease grassland diversity.
As we have seen, heat stress also decreases the productivity of livestock, especially cattle, reduces fertility, and can be life-threatening. Conception rates are a particular issue for cattle that breed in the spring and summer months. Heat stress also limits dairy milk yield regardless of food consumption.
The world’s fisheries are in a state of crisis. Environmental change and pollution, combined with overfishing and the rise of invasive species, are deteriorating the health of the global fishing industry. Nothing is more symptomatic of this decline than the history of cod in the North Atlantic. Cod has long been a staple for societies in the region, including Iceland and Scandinavia, and a century ago, cod were so abundant that whole fisheries thrived on this one fish. Now, however, cod has been so overfished that conflicts over fishing rights have erupted (between Iceland and Britain from the 1950s to the 1970s, dubbed the "cod wars") and populations have plummeted. The same countries that depleted the stock of cod are now overfishing other species, including haddock and skate.
All of this comes at a time of growing consumption of fish. The Intergovernmental Panel on Climate Change (IPCC) predicts that global fish production will increase from 2012 to 2020, but not as rapidly as will demand. Wild fish will comprise the majority of fish caught in sub-Saharan Africa and the US, but in other parts of the world, aquaculture will increase in importance. It could be that aquaculture or fish farming will dominate fisheries by the end of the century. However, aquaculture has its own set of issues. For carnivorous fish such as salmon, fish farming utilizes a considerable supply of feeder fish. Moreover, fish farms are at a significant risk of environmental problems and are susceptible to outbreaks of disease.
The superb overview of the state of the modern ocean by Jeremy Jackson identifies several distinct habitats and ecosystems that are in a significant state of decline:
As a result of all of these changes, over 100 species of fish are currently on the extinction watch list. Estimates suggest that fish biomass has declined by more than 50% in the last 40 years. Some scientists warn of a complete collapse of marine fisheries by 2050, which would be devastating for communities that rely on fish as their major source of food.
The following video describes the stark future of global fisheries as a result of overfishing:
“For every pound [of seafood] that goes to market, more than 10 pounds, even 100, may be thrown away as by-catch.” — Sylvia Earle
90% of large predatory fish, such as tuna, swordfish, and sharks, are now gone. 90% of large whales, and 60% of the small ones, are now gone from estuaries and coastal waters. 100 million sharks are killed every year. 100,000 albatross are killed every year by fishing!
A study by Dalhousie University in Canada projects that by 2048 all the species that we fish today will have collapsed. That is in 38 years.
So, aside from those of us who enjoy the occasional salmon sashimi, spicy tuna rolls, salted grouper, or pan roasted Chilean sea bass, why should humanity care about the extinction of these species? We are destroying a food chain system kept in balance by evolution through millennia. There will be no big fish to eat the medium fish. And too many medium fish to eat all the small fish. And then there is no one to eat the really small organisms: the plankton, the algae, etc.
The result, slime: shorthand for the increasingly frequent appearance of dead zones, red tides, and jellyfish that, when they die out, sink to the bottom of the ocean and consume dissolved oxygen as they rot. Nothing can live in these oxygen-depleted waters except bacteria. So it’s like we are getting rid of the complex, sophisticated organisms that took millions and millions of years to develop.
And… replacing them with the most basic ones. Not the wisest of evolutionary strategists, are we?
Green Forum Oceans
abc planet
In the US and Canada, warming trends have led to earlier spring activity of insects. Down the road, entomologists predict that many species of insects may thrive on a warmer planet, and populations may explode. This is because research shows that insects that are adapted to warmer climates have faster population growth rates. Insect species generally have short life cycles, fast and prolific reproduction and rapid mobility, thus warming can readily result in massive increases in populations.
Insects have a number of responses to warming. Some species may avoid warmer temperatures by moving to cooler regions. Others may adapt to warmer climates by changing their biochemistry or their behavior. Of course, some insects may not adapt at all and will disappear, but those that can adapt will thrive in warmer environments. In general, insects are expected to benefit most in mid and high latitudes. For example, in Alaska and Siberia, longer summers are already producing demographic explosions in defoliating and wood-eating insects that have devastated thousands of forest acres and caused millions of dollars in damage. In the future, it is very likely that other insect groups, such as locusts, will cause even more significant crop losses across the globe.
The research on weeds and climate change is a little more complicated. Some weeds are not expected to thrive with higher CO2 levels. These are weeds that belong to the C4 plant groups that typically do well under lower CO2 conditions. However, some of the most invasive weeds belong to the C3 plant groups. These include Canada thistle, star thistle, quackgrass, lambsquarter, and spotted knapweed. In addition, the use of herbicides to control weeds is potentially problematic at high CO2 levels. Experiments suggest that certain weeds become tolerant to chemicals such as Roundup at high CO2 concentrations.
Biofuels are fuels that derive their energy from biological carbon fixation via photosynthesis. Biofuel sources include a whole variety of plants, such as corn, sugar cane, soybeans, and sunflowers, as well as aquatic algae. The most common compounds used to make fuels are sugars, starch, and vegetable oil. A wide variety of fuel types fall under the biofuel umbrella, including bioalcohol (most often bioethanol), biodiesel, vegetable oil, and solid biofuel. Biofuels were recently considered a vital part of the world's future energy portfolio, and the most compelling arguments for their production in the US were energy independence and the low cost of production. Recently, however, traditional biofuels have fallen somewhat out of favor, partly as a result of environmental and ethical concerns, and partly due to the surge in natural gas production.
Biofuels are a very complicated and constantly evolving issue as research on them intensifies and the global energy portfolio changes. Research is being directed at fuel production, for example, the development of fuels that yield the most energy while requiring the least land area to grow. However, the key ethical issue is that the production of biofuels uses land that could also be used to grow crops to feed people. In Brazil, which is the world's second-largest producer of bioethanol, large agribusinesses are devoted to its production. However, subsistence farmers often make more money producing biofuels than crops for food, and this has led to a loss in land area for producing crops for consumption. In addition, deforestation to develop acreage for biofuel production helps to accelerate climate change. Finally, the use of agricultural land to produce biofuels has the potential to drive up the price of food.
These ethical issues are forcing the biofuel industry as well as governments around the world to invest in research into fuels that are ethically acceptable. If biofuels are to be an accepted part of our energy future, fuel sources must be developed that require less land area and less water per unit of energy. Alternatively, fuel such as cellulosic ethanol can be produced from crops or waste products that cannot be consumed; other potential fuel sources include aquatic algae and agricultural or human waste.
The following video describes the advantages and disadvantages of biofuels.
Agriculture in developed countries is less likely to be vulnerable to climate change than in developing nations. In both places, the impact will depend on the nature of the climate changes, as well as the ability of agriculture to adapt through technological advances and changing food demand. For example, the management of water resources (see Module 8) will be critical for agriculture in arid and sub-arid environments facing increasing drought. Advanced soil management and crop rotations can also help maintain healthy soil conditions. In the following, we describe a menu of adaptation strategies that have the potential to maintain and increase food production over the coming decades.
All of these approaches have been tested and yield positive results. However, their implementation will require decisions by governments as well as substantial investment. Such investment is most challenging in under-developed nations, where policies will need to be conducive to promoting family-run farms. Nevertheless, consideration of these approaches must increase for nations to attempt to feed their citizens in the face of population increase and climate change.
Food security refers to a wide range of factors that affect the supply of food in sufficient quantities to keep populations nourished. As we have seen, a number of scenarios related to climate change have the potential to disrupt that supply. The most drastic food supply issues derive from extreme events such as natural disasters and political unrest, as well as from climate change. For example, floods and heat waves can drastically reduce the supply of food. Currently, more than 1 billion people are estimated to lack sufficient dietary energy, and at least twice that number suffer micronutrient deficiencies.
A key component of the availability of food is its price. Recently, in the US, food prices have increased significantly. This has been part of a global increase in the price of food, driven in part by weather events such as the prolonged drought in Australia and floods in the Midwest and parts of Southeast Asia. The increase has led to demonstrations and rioting in more than 14 countries. The situation is particularly bad in Haiti, where citizens have eaten mud cakes just to survive, and where hunger and starvation continually threaten to spark unrest.
Food prices are controlled by many other factors besides climate change; for example, currency fluctuations, import and export policy, energy costs, as well as population growth. In the future, climate change and population growth are likely to cause major food security problems. Solutions to the problem must include a combination of concerted efforts to protect people all over the world who are facing hunger and starvation, changes to agricultural practices discussed previously, as well as action to reduce greenhouse gases.
As we have seen, biofuels have the potential to cause major food security issues. Ethanol produced from corn requires a lot of crops compared to other biofuel sources. In fact, one gas tank’s worth of ethanol requires the same amount of corn that could feed one person for an entire year. Thus, competition for corn between food suppliers and energy producers has the potential to drive up corn prices drastically and potentially lead to shortages. This has led many to question the ethics of using corn as a source of fuel. Subsidies for biofuel production by farmers in the US and Europe are therefore extremely controversial.
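The "one tank, one person-year" comparison can be sanity-checked with rough, commonly cited conversion factors. All the numbers below are illustrative assumptions, not measured values: a 25-gallon tank, about 2.8 gallons of ethanol per bushel of corn, 56 lb per bushel, roughly 1,650 kcal per pound of corn grain, and a 2,000 kcal/day diet:

```python
# Rough check of the tank-of-ethanol vs. person-year comparison.
# All conversion factors below are approximate, illustrative values.
TANK_GALLONS = 25            # typical large SUV fuel tank
ETHANOL_PER_BUSHEL = 2.8     # gallons of ethanol per bushel of corn
LB_PER_BUSHEL = 56           # weight of a bushel of corn
KCAL_PER_LB_CORN = 1_650     # food energy in a pound of corn grain
KCAL_PER_DAY = 2_000         # typical adult daily requirement

corn_lb = TANK_GALLONS / ETHANOL_PER_BUSHEL * LB_PER_BUSHEL
days_fed = corn_lb * KCAL_PER_LB_CORN / KCAL_PER_DAY
print(f"{corn_lb:.0f} lb of corn -> food for {days_fed:.0f} days")
# -> about 500 lb of corn, enough food energy for well over a year
```

Under these assumptions, a single tank consumes roughly 500 lb of corn, which is more than a year of food energy for one person, consistent with the comparison above.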
One of the most difficult problems facing agriculture is that the industry itself is a major contributor to greenhouse gases. A considerable amount of energy is consumed producing fertilizers, running the factories that convert crops into packaged food, and transporting food to market. Currently, agricultural activities around the world are responsible for 12% of greenhouse gas emissions, a figure closer to 30% if the impacts of deforestation and the production of nitrogen fertilizers are included. Because more fertilizer and more deforestation will be required to feed the world’s growing population, this number is set to increase.
So, where is all of this headed? It's very hard to predict. Food, like water, is so vital for human livelihood, and climate change is almost certain to cause shortages in some regions of the planet. The likelihood, therefore, is for intensifying conflict in these parts of the globe in the future.
In this module, you should have mastered the following concepts:
You should have read the contents of this module carefully, completed and submitted any labs, the Yellowdig Entry and Reply and taken the Module Quiz. If you have not done so already, please do so before moving on to the next module. Incomplete assignments will negatively impact your final grade.
Superstorm Sandy came ashore in New Jersey on October 29th, 2012. The storm caused 109 fatalities in the US and more than $71 billion in damage and lost business income. The destruction caused by a combination of wind, flooding, and storm surge was focused in New Jersey and New York City. An extremely large area hundreds of kilometers across received extensive damage. Half of the city of Hoboken flooded, a 50-ft segment of the Atlantic City boardwalk washed away, and hundreds of beachfront properties all over the New Jersey shore were damaged or destroyed. In New York City, the East River flooded its banks in lower Manhattan and the subway system suffered its worst flooding in history. A storm surge of almost 14 ft flooded Battery Park and a 16 ft surge hit Staten Island, where damage and casualties were particularly severe. However, lost in the discussion about the causes and impacts of the storm is this: scientists have known for a long time that the New York City region is very vulnerable to storm damage.
Sandy reinvigorated the debate about hurricanes and climate change (see Module 3). Remember, climate change is expected to result in larger storms, though not necessarily more frequent storms, although there is still a lack of agreement among scientists on this forecast. The basic problem is that, unlike temperature and precipitation, where we have massive amounts of data with which to identify trends, run models, and make projections, we have very few large storms from which to establish reliable trends. Nevertheless, the amount of heat in the North Atlantic Ocean in October 2012 is a strong, if not irrefutable, argument for a relationship between Sandy and climate change. And the interaction between a hurricane arriving very late in the season for the northeast coast and a cold front more typical of that time of year was what made Sandy so large and so deadly, as well as what steered it toward the coast. The discussion about Sandy and climate change is bound to continue for a long time. However, we know for certain that global sea level is about 1.5 ft higher than when New York City was hit by a storm of comparable size in 1821, the massive Norfolk and Long Island Hurricane, and that extra foot and a half made a huge difference in Lower Manhattan.
Ten percent of the world's population, or approximately 600 million people, live on land that is within 10 meters of sea level. This low-elevation coastal zone includes some of the world's most populous cities besides New York, including London, Miami, Calcutta, Tokyo, and Cairo. In the US, the situation is most dire in New Orleans, where a large portion of the city lies below sea level, making it highly vulnerable to storms such as Hurricane Katrina. New Orleans is in an especially precarious position because the land on which it is built is subsiding rapidly, at a rate far faster than modern sea level rise, as a result of the development of marshlands. Major decisions will need to be made in the future on whether to keep investing in infrastructure to keep the seas out or whether to retreat from low-lying areas. Such investment has already been made in the Netherlands, where a massive system of flood protection has been developed to keep the oceans at bay. However, tiny island nations in the Pacific and Indian Oceans that lie within a meter's elevation of sea level do not have the resources to protect their land from rising seas, and these nations are grappling with the distinct possibility that they will be completely submerged by the middle part of the century.
Average global sea level has risen about 17 cm since 1900 with considerable variability from place to place. The average global rate of sea level rise is about 3 mm per year, but in parts of the western Pacific, this rate is closer to 1 cm per year. New techniques enable extremely precise measurements of sea level, and this has allowed geoscientists to determine the vulnerability of different places to future sea level rise. As we have observed in Module 2, the large ice sheets of the world are melting at rapid rates. In fact, the Intergovernmental Panel on Climate Change predicts that sea level will rise by up to an additional 0.6 m by the year 2100 (see adjacent plot), although a great deal of uncertainty is associated with the unpredictability of ice sheet behavior combined with different emissions scenarios and warming trends.
In fact, if we go back 125,000 years before present to the last interglacial period, much of Greenland was ice-free and sea level was 4-6 meters above present. However, this amount is dwarfed by the sea level changes that have taken place in deep geologic time. For example, about 90 million years ago, sea level was hundreds of meters higher than today, and the ocean extended across the North American continent connecting the Gulf of Mexico to the Arctic Ocean.
The stakes are huge. Recent data suggest that the melting of the Greenland Ice Sheet is accelerating. Imagine the consequences of this process should it continue for decades to come. Just one number should make the point clearly: a seawall being discussed to protect New York City and parts of New Jersey from future Sandys would cost about $23 billion. Imagine what it would cost to protect Boston, Philadelphia, Washington DC, Miami, Houston, Los Angeles, and San Francisco!
On completing this module, students are expected to be able to:
After completing this module, students should be able to answer the following questions:
Below is an overview of your assignments for this module. The list is intended to prepare you for the module and help you to plan your time.
Action | Assignment
---|---
To Do |
For most people, sea level rise means melting ice sheets; it is easy to visualize a glacier melting into the ocean. As it turns out, an equally important factor is the expansion of seawater as it warms. In this section, we explore these different mechanisms in some detail.
What causes absolute changes in sea level? As we have seen, the most direct way is through the growth and melting of the major ice sheets, as discussed in the following video.
Video: NASA: A Tour of the Cryosphere 2009 (5:12)
The following videos describe melting of ice sheets on Greenland and Antarctica.
This process has been active over much of geologic time, except during the very warmest periods when there were no polar ice sheets. If we were to melt all of the ice on Antarctica and Greenland, we would see a sea level rise of almost 70 meters (Greenland would contribute about 6 m of sea level rise, Antarctica about 60 m). This would require melting the relatively stable interiors of the ice sheets, which would take thousands of years even if modern warming rates continue unabated. However, there is much we do not understand about the behavior of the more dynamic areas of the ice sheets closer to their edges, and this imparts a great deal of uncertainty to any predictions of sea level rise in the coming centuries.
New research regularly identifies vulnerable parts of the Antarctic ice sheet, especially its ice shelves, the places where glacier ice extends out over seawater or rests on bedrock that lies below sea level. Glaciologists are able to use radar instruments to image the base of the ice and the underlying seabed. Recently, they have found places in West Antarctica where the seabed is much smoother than expected, meaning that the ice can slide across it readily under the right circumstances. Moreover, some of these places are vulnerable to being heated by warm ocean currents in the future. We presented the physical evidence for ice melting in Module 2; below are before-and-after photos from Alaska to remind you.
The second process that is causing sea level rise on human time scales is the physical expansion of seawater as a result of temperature increase. When materials are heated, they expand and, in the case of the oceans, this causes the surface of the water to rise. This thermal mechanism can cause absolute sea level changes on the order of millimeters to centimeters per decade. It varies geographically, depending on how fast the ocean is warming in individual locations, and temporally, depending on variations in ocean temperatures associated with climate oscillations such as El Niño. Hard as it may be to imagine given all of the press attention on melting ice, thermal expansion may actually cause more sea level rise in the 21st century.
The following video describes how satellites provide a very detailed picture of sea level change.
In the geologic past, significant sea-level rise was caused by major episodes of volcanism that added crust to the ocean basins and displaced seawater towards the land. This happened when processes deep in the Earth's interior caused seafloor spreading rates to increase and drove massive submarine eruptions that built volcanic plateaus away from the ridges. These processes operate on very long, geological time scales and are not a factor today.
Scientists are increasingly concerned about one massive glacier, the Thwaites glacier, on the edge of the West Antarctic Ice Sheet. Satellite images show that the Florida-sized glacier is rapidly becoming destabilized, with giant cracks criss-crossing its surface. These features suggest that collapse could happen within the next decade. A large part of the Thwaites glacier on its eastern side is held back by a giant ice shelf, which is the part of the glacier floating on the ocean surface. The ice shelf acts as a brace or a buttress, holding the giant ice sheet back and keeping it from disintegrating. The ice shelf is being heated from below by the warming ocean and scientists have recently noticed cracks and crevasses, which signify that it is thinning. Moreover, these features allow warm water to attack further inside the glacier, accelerating its melting. At some stage, the ice shelf will break up, a phenomenon that is happening in numerous places along the edge of Antarctica. This will cause the Thwaites glacier to rapidly collapse into the ocean. Estimates are that this collapse would cause a 65 cm rise in global sea levels over a few years. Remember that sea levels are currently rising by millimeters per year and have risen by 20 cm since 1900, so this sudden rise would be unprecedented and catastrophic for low-lying cities such as New York, Miami, Mumbai, and Shanghai.
Melting of the Thwaites glacier already accounts for about 10% of global sea level rise. The glacier itself is also being attacked by warm water along its base. The ice grounding line, which is where the edge of the base of the ice sheet runs across bare continental rock, is being melted by warm waters pumped under the ice shelf by tides. This zone is rugged and chaotic, with broken-up rock, blocks of ice, and warm water mixed in. As the grounding line retreats, an ever-thicker face of ice is exposed to the warm water.
The collapse of the Thwaites glacier could also lead to the failure of other nearby glaciers via a process known as Marine Ice Cliff Instability, in which rapidly retreating ice sheets that are not buttressed by ice shelves expose large, unstable cliffs that readily collapse into the ocean. The collapse of Thwaites would thus expose the fronts of adjoining glaciers to the warming ocean, leading in turn to their collapse. Models suggest that the collapse of Thwaites could ultimately lead to the collapse of much of the West Antarctic Ice Sheet, producing three meters of sea level rise via this process. For this reason, the Thwaites glacier is also known as the "Doomsday Glacier". The timing of its demise is currently impossible to predict, but the signs are that it is beginning, and the resulting sea level rise would have a very profound impact on coastal cities around the world.
Sea level is rising at present and this rise provides some important insight into recent climate change — a warmer planet means the transfer of glacial ice to the oceans, and warming of the oceans causes an expansion of seawater. There are two main ways to directly measure recent changes in sea level — tide gauges and satellites. Tide gauges are fairly simple devices that record the height of sea level at a particular spot; the records are dominated by the daily tides (see photo below), but the data can be used to estimate the average sea level height each year.
Tide gauges measure the height of sea level relative to the ground; if the ground is stable, then they may be recording a global sea level change, but if the ground is unstable, then the tide gauge record is difficult to interpret. Sites near the boundaries of tectonic plates are geologically unstable, so their tide gauge records are not reliable. Other sites, near the centers of formerly large ice sheets, are also unstable in the sense that the land there is rising due to the removal of the ice from the last glacial age. This ice weighed a great deal (it was around 5 km thick), and its removal has triggered a very slow, steady rise of the crust. It is like pushing down on a block of wood floating in water and then removing your hand: the block rises up. Extending this analogy to the Earth, the block of wood is the crust and the water is the mantle, which is a very, very sluggish fluid; thus, the crust does not spring up quickly, but takes several tens of thousands of years to rebound. If the land is rising, then it appears that sea level is falling, and indeed, this is seen in many tide gauge records from polar regions.
This tide gauge record comes from Churchill, Manitoba (Canada), on the edge of Hudson Bay. Here, you can easily see the downward trend, meaning that sea level appears to be falling, but only in a relative sense: the crust is rising in response to the melting of an ice sheet about 5 km thick that was present during the last ice age, so local sea level appears to fall.
Because of the problem of crustal stability, tide gauge records have to be selected very carefully if we want to learn something about how global sea level is changing. Not surprisingly, this work has been done (Douglas, 1997), and a set of 23 tide gauge records have been selected, shown in the figure below.
The satellite data are interesting to consider: how can a satellite measure sea level changes? The answer is surprisingly simple at one level: the satellite just measures the distance from itself down to the sea surface. The figure below shows how the system works, using a radar beam that bounces off the water surface and returns to the satellite. The height of sea level is then the difference between the satellite's height above a reference ellipsoid (a smoothed approximation of the Earth's shape) and the measured distance to the sea surface.
The satellite orbits the Earth at a height of around 1,300 km and can cover the surface of the oceans in 10 days, making several hundred thousand measurements during that time while keeping very close track of where it is located relative to control points on the Earth. As a result, this system can measure mean sea level to within a few millimeters, which is quite a remarkable achievement.
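The subtraction at the heart of satellite altimetry can be sketched in a few lines of Python. This is an illustrative toy, not real mission data: the altitude and range values are invented, and real processing involves many corrections (tides, atmospheric delays) that are ignored here.

```python
def sea_surface_height(satellite_altitude_m, measured_range_m):
    """Sea surface height above the reference ellipsoid: the satellite's
    known altitude above the ellipsoid minus the radar-measured distance
    down to the water surface."""
    return satellite_altitude_m - measured_range_m

# Hypothetical pass: a satellite 1,300 km above the ellipsoid measures
# a radar range 4.7 cm shorter than that altitude.
h = sea_surface_height(1_300_000.000, 1_299_999.953)
print(round(h, 3))  # 0.047 -> the sea surface sits 47 mm above the ellipsoid
```

Note the precision problem this implies: recovering millimeter-scale sea level signals requires knowing the satellite's position to comparable accuracy, which is why the tracking against control points matters so much.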
These observations show that sea level is undoubtedly rising, but in a somewhat irregular pattern, and they lead us to ask why. What is causing this rise in sea level? There are two main explanations. One is a simple expansion of seawater as it warms and becomes less salty, known as the steric effect. As seawater expands, it takes up a greater volume, and so the sea surface elevation rises. If the seawater warms by 1 °C, the steric sea level rise would be about 7 cm. An additional small increase comes from a slight freshening of the oceans, since salty water is denser than fresh water. This steric effect is believed to account for the majority of the observed sea level rise. The other main cause is the melting of glaciers, which transfers water previously stored above sea level back to the oceans. As we have seen, mountain glaciers around the globe are melting, and so are the big ice sheets of Greenland and Antarctica.
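The roughly 7 cm per 1 °C steric figure quoted above can be recovered with a back-of-the-envelope calculation. The expansion coefficient and warmed-layer thickness below are assumed illustrative values chosen to match the figure in the text, not measured ocean properties.

```python
def steric_rise_m(alpha_per_degC, layer_thickness_m, warming_degC):
    # dh ~ alpha * H * dT: thermal expansion of a warmed surface layer
    return alpha_per_degC * layer_thickness_m * warming_degC

# Assumed values: an expansion coefficient of ~2e-4 per degree C acting
# on a warmed surface layer ~350 m thick, warmed by 1 degree C.
print(round(steric_rise_m(2e-4, 350, 1.0), 3))  # 0.07 m, i.e., ~7 cm
```

The real ocean is messier: the expansion coefficient itself depends on temperature, salinity, and pressure, which is one reason the steric contribution varies so much from place to place.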
The effect of warming and freshening of the oceans is believed to be behind the annual cycle seen in the above satellite data. As you might expect, the warming and freshening do not occur uniformly across the globe, so the global pattern of sea level rise and fall is somewhat complicated, and this goes a long way toward helping us understand why the tide gauge records (from the “stable” sites) show considerable variations.
Looking further back, we must rely on the geologic record. Benthic foraminiferal species that live in marshes have narrow depth ranges. Foraminifera found in cores of tidal-marsh sediments can therefore be used to infer the water depth at which they lived, and the sediments can be dated to provide an age. In this way, geologists have determined changes in sea level along the North Carolina coast over the past two thousand years, long before tide gauges were employed. The curve shows a sharp increase in the rate of sea level rise in the middle of the 19th century, coincident with the Industrial Revolution.
Corals provide a reliable way to determine past sea level, since they grow within about 5 meters of the sea surface (see Module 7). If we find a coral submerged at 200 meters (see below), then sea level must have been roughly 195-205 meters below current levels when the coral grew. In this way, a sea level record for the last 22 thousand years has been developed from the depths of drowned corals below current sea level, dated using carbon-14 or uranium isotopes.
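The coral reasoning can be written out explicitly. Here the roughly 5 m growth range is treated as an uncertainty band either side of the measured depth, which is how the 195-205 m range arises; this is a sketch of the logic, not a published reconstruction method.

```python
def past_sea_level_range_m(coral_depth_m, growth_band_m=5.0):
    """A drowned coral now at `coral_depth_m` pins past sea level to
    within the coral's near-surface growth band either side of that depth."""
    return coral_depth_m - growth_band_m, coral_depth_m + growth_band_m

lo, hi = past_sea_level_range_m(200.0)
print(lo, hi)  # 195.0 205.0 -> sea level was 195-205 m below present
```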
Besides coral, there are other indications of sea level rise. Fjords are drowned glacial valleys, and estuaries, such as the Chesapeake Bay, are drowned river valleys. Both of these morphological features indicate that sea level has risen recently.
We can see that sea level changes much more dramatically on the timescale of the ice ages. The last ice age peaked around 20,000 years (20 kyr) ago, and at that time, a great deal of water from the oceans was locked away in large ice sheets, so sea level was about 120 m lower than it is today! This is a colossal change and it would have dramatically changed the coastlines of the world. It is interesting to consider the rate of sea level change associated with this transition out of the last ice age. Sea level rose by about 120 meters in about 10 kyr, giving us a rate of 12 mm/yr, which is about four times faster than the average rate of sea level rise today.
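The rate comparison in the paragraph above is simple arithmetic, worth seeing worked out:

```python
rise_m = 120.0               # post-glacial sea level rise
duration_yr = 10_000         # over roughly 10 kyr
modern_rate_mm_per_yr = 3.0  # the modern average quoted earlier

rate_mm_per_yr = rise_m * 1000 / duration_yr
print(rate_mm_per_yr)                          # 12.0 mm/yr
print(rate_mm_per_yr / modern_rate_mm_per_yr)  # 4.0, i.e., four times faster
```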
Imagine what the world looked like 20,000 years ago when sea level was so much lower. To help, look at the figure below, which shows the elevation both above and below current sea level. The modern shoreline is easy to see — it is where the green colors begin. The black line offshore shows about where the shoreline was when sea level was 120 meters lower than today.
Now, more realistically, let's imagine what will happen to the coastal zone in coming centuries if sea level rise accelerates. We will examine flood maps in the following lab.
In fact, changes in the height of the ocean as a result of melting ice or warming seas (absolute or eustatic sea level change) tell only part of the story. The level of the ocean can also change because the underlying land is rising or falling with respect to the ocean surface. Such relative sea level change usually affects a local or regional area and, in numerous cases, is actually outpacing the rate of absolute sea level change.
Relative sea level changes can be caused by plate tectonic forces. For example, the great Japan (Tōhoku) earthquake of March 11, 2011, abruptly shifted the coastline of Honshu, causing stretches of it to subside by up to a meter. The land can also drop when huge amounts of sediment are deposited by rivers in deltas: the weight of the sediment depresses the underlying crust, often at a faster rate than the sediment is being deposited. This process is happening along the east coast of the US today, in places along the coasts of Maryland, North Carolina, and Georgia, and along the Gulf Coast, especially in the Mississippi Delta region. In parts of Florida where large amounts of water have been pumped out of aquifers for water supply, the land is also subsiding rapidly.
One of the most drastic examples of relative sea level change is around New Orleans, where sea level is rising with respect to the land at alarming rates. The map below left shows that in parts of the city, the land surface is subsiding at nearly 3 cm per year! There are two additional alarming parts of this story: first, the rates may actually have been faster in the past and have the potential to increase again; second, parts of the city already lie 4 meters below sea level (see map below left). The causes of the New Orleans subsidence are partially natural: the city sits on delta sediments that are accumulating very rapidly, and their weight causes the underlying crust to subside. However, humans are also responsible. Draining wetlands, over-pumping aquifers, and diverting floodwaters from the Mississippi River are all contributing to rapid subsidence.
The New Orleans subsidence map is an excellent example of relative sea level change. The fact that the land the city is built upon is subsiding so rapidly makes it more vulnerable than most areas to absolute sea level rise from warming oceans and melting ice sheets. Later in this module, we will examine some of the changes the city has made to combat both sources of sea level rise. However, the rates of subsidence are so rapid that some scientists are advising retreat from the coast as the wisest option from an economic and safety point of view.
Changing sea levels around New Orleans and other regions are at least partially natural changes that have been occurring for hundreds of millions of years; absolute and relative sea level has been rising and falling continuously through geologic time. As the coastline moves landward with sea level rise (a change called transgression), thick marine deposits are laid down: beach sands over coastal marsh sediments, deeper shelf deposits over beach sands, and slope sediments over shelf deposits. This marine deposition is generally considered to be continuous. Conversely, as the coastline moves back seaward (a change called regression), shelf deposits are superimposed on slope deposits and marsh deposits are laid down over beach sands. Rock sequences on land show these characteristic patterns, for example, coals deposited in swamps overlain by beach and shelf sands.
Once this regression reaches a certain point, much of the shelf becomes subaerial and is characterized by sporadic deposition and erosion. This erosion creates a surface called an unconformity, which is essentially a gap in time. The unconformities end up bounding a package of strata deposited by one individual sea level rise and fall. Such packages are known as sequences. Multiple sea level rises and falls have produced sequence upon sequence underneath the continental shelf and slope. These sequences hold within them the history of relative and absolute sea level change.
Geophysicists can image the subsurface using a technique called reflection seismology. Elastic waves are produced by airguns towed behind ships; these waves travel down through the ocean and reflect back off the seafloor and the layers beneath it. It turns out that the unconformities between the sequences reflect much more energy than the conformable layers within the sequences, so the sequences can be mapped precisely using reflection seismology. Individual sequences can be dated using microfossils (see Module 1), and in this way, the history of sea level can be determined. This technique is known as sequence stratigraphy.
So, the big question is: how do we know whether sequence stratigraphy reflects relative sea-level change caused by changes in subsidence, sedimentation, and sediment loading, as discussed in the previous section, or absolute (eustatic) sea level change? Individual sequences can be confirmed as eustatic when they are observed on different continental margins; it would be difficult to imagine how regional subsidence, sedimentation patterns, and sediment loading could affect margins in different parts of the world at the same time.
As it turns out, sea level curves such as the ExxonMobil curve contain a hierarchy of cycles. Short-period cycles lasting a few thousand years are superimposed on cycles lasting millions of years, and these in turn are superimposed on long-period cycles lasting hundreds of millions of years. The current chart has five orders of cycles, all superimposed. The longest-order cycles are almost certainly eustatic in origin. The shortest-order cycles, which cannot unequivocally be matched between continents, likely record relative sea level changes. The big argument is whether the cycles in between represent absolute or relative sea level changes. Long-order changes likely represent slow processes such as changes in seafloor spreading rates, whereas middle-order cycles likely represent faster sea level changes associated with the growth and melting of glaciers.
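The idea of superimposed orders of cycles can be visualized with a toy curve: a few sine waves with very different periods added together. The amplitudes and periods below are illustrative placeholders, not values from the actual ExxonMobil chart.

```python
import math

def synthetic_sea_level_m(t_myr):
    """Toy sea level (meters) at time t (millions of years): a long-order,
    a middle-order, and a short-order cycle superimposed."""
    long_order = 100 * math.sin(2 * math.pi * t_myr / 300)  # ~300 Myr period
    mid_order = 30 * math.sin(2 * math.pi * t_myr / 10)     # ~10 Myr period
    short_order = 5 * math.sin(2 * math.pi * t_myr / 0.1)   # ~100 kyr period
    return long_order + mid_order + short_order

# Sampling the curve shows the short cycles riding on the longer ones.
samples = [round(synthetic_sea_level_m(t), 1) for t in (0.0, 75.0, 150.0)]
print(samples)
```

Plotting this function at fine time resolution reproduces the visual character of the published charts: small wiggles superimposed on broad swells.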
We have been there before. There is plentiful evidence that the sea flooded the interiors of continents multiple times in Earth history. These marine incursions alternated with times when the ocean receded to below the level of the current continental shelf. Sedimentary rocks found in the interiors of continents contain the fossilized remains of marine organisms such as clams, oysters, and corals, demonstrating that they were deposited beneath the sea. Geologists have known about these continent-scale sea level fluctuations for a long time. In the 1950s and 1960s, the brilliant geologist Larry Sloss of Northwestern University mapped marine units across North America from one side of the continent to the other and showed that these flooding episodes lasted many millions of years.
We now know that some of the highest sea levels occurred at times when the climate was warm and there were no ice sheets covering the poles. Likewise, times when global sea level was very low corresponded to cold intervals when ice sheets were extensive. As we established in Module 5, one of the main drivers of climate during these times was the amount of CO2 in the atmosphere.
One period with high CO2 levels, the late part of the Cretaceous about 90 million years ago, was a time when the high latitudes were too balmy to maintain ice sheets. There is much evidence for late Cretaceous warmth, none more compelling than the discovery of fossils of palm trees and reptiles in northerly places such as Alaska and Ellesmere Island in northern Canada. Beginning in the late Eocene (about 40 million years ago), ice began to accumulate on Antarctica, and in the late Pliocene (about 3 million years ago), the northern hemisphere ice sheets began to grow. These changes in the cryosphere (the technical name for the frozen parts of the Earth, including the major ice sheets) led to substantial changes in sea level. In fact, some estimates of sea level in the middle Cretaceous are 170 meters higher than at present, and those for the Eocene are 100-150 meters higher (see curve below).
We have seen that if we were to melt all the ice of the Antarctic and Greenland ice sheets, sea level would rise by about 70 meters. So how could sea level in the Cretaceous and Eocene have stood double that amount, or more, above present levels?
The answer to this question is not just a climatic one. The Cretaceous, in particular, was a time of very active volcanism. Thick deposits of volcanic ash are found in Cretaceous marine sediment sequences, suggesting some gargantuan volcanic eruptions. Moreover, there is substantial evidence that volcanic activity was also more intense in the ocean basins themselves. Numerous large plateaus, known as large igneous provinces, or LIPs for short, date back to the Cretaceous. One of these, the Ontong Java Plateau in the western Pacific, covers an area as large as Alaska and is up to 30 km thick. In total, the Ontong Java eruptions lasted a few million years and spewed out some 100 million cubic kilometers of lava. The period from 125 to 90 million years ago was characterized by very active volcanism. In addition to the Ontong Java Plateau, there are numerous other LIPs in the Pacific and Indian Oceans, as well as the Caribbean. In fact, much of the western Pacific Ocean is characterized by topographically elevated crust of Cretaceous age, known as the Darwin Rise, that was produced during these times of very active volcanism. The Darwin Rise is peppered with volcanic islands and seamounts.
There is evidence that the largest of the LIP eruptions, including those that formed the Ontong Java and Caribbean plateaus, involved the outgassing of so much CO2 that they caused abrupt global warming events somewhat similar to the PETM. Because the ocean was already quite warm at these times, and warm water holds less oxygen than cold water (as we saw in the section on hypoxia in Module 6), the Cretaceous LIP eruptions are thought to have triggered ocean-wide hypoxic or anoxic events that led to the deposition of sediments called black shales. As it turns out, these black shales are thought to have sourced large amounts of the world's petroleum, and the hypoxic events had major evolutionary consequences.
Emplacement of LIPs would have displaced a great deal of seawater. This volcanism can thus explain why Cretaceous sea level was significantly higher than can be accounted for even by complete melting of the modern ice sheets. The other potential reason is that the Cretaceous also saw higher rates of volcanism at mid-ocean ridges, which would likewise have displaced seawater and elevated sea level.
In this section we explore current and future challenges posed by sea level rise. Our example case studies come from the developed and developing world. We will see that there will be a very different range of options in countries with ample resources from those without.
Let us assume that by 2100 sea level will rise by an amount similar to the upper bound of the 2007 IPCC estimates, roughly 60 cm. At the same time, let's assume that subsidence rates in the New Orleans region continue at current rates of between 2 and 28 mm/year. The result would be between 0.8 and 3.1 m of sea level rise relative to the current land surface.
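The 0.8-3.1 m range comes from adding cumulative subsidence to the absolute rise over the decades to 2100. Here is the arithmetic as a sketch, assuming roughly 88 years to 2100 (the time span is my assumption; the other numbers are from the text):

```python
def relative_rise_m(absolute_rise_m, subsidence_mm_per_yr, years):
    # relative sea level rise = absolute rise + cumulative land subsidence
    return absolute_rise_m + subsidence_mm_per_yr * years / 1000

low = relative_rise_m(0.6, 2, 88)    # slow-subsidence end of the range
high = relative_rise_m(0.6, 28, 88)  # fast-subsidence end of the range
print(round(low, 1), round(high, 1))  # 0.8 3.1
```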
Even if sea level rise does not inundate low-lying coastal regions, it will make them more prone to flooding during storms and prolonged periods of heavy rainfall. In fact, this is the predominant fear in places like Bangladesh and the Mississippi Delta region near New Orleans. Let's consider the area around New Orleans, where hurricanes are a constant threat. The extensive damage caused by Hurricane Katrina arose not from wind or rain but from the massive storm surge, the giant wall of water that was pushed up onto the land during the hurricane. This storm surge was over 9 meters to the northeast of the city and peaked at about 5 meters within the city limits. This water either overtopped the levees and flood walls that were built to protect low-lying areas of the city or, more frequently, combined with wave action to topple levees from the base upwards. In the aftermath of Katrina, the Army Corps of Engineers rebuilt the levee and floodwall system in New Orleans, and upgraded pump stations, to defend the city from a similar storm surge in the future. The new system offers multiple lines of defense beginning outside the perimeter of the city. The massive Inner Harbor Navigation Canal Lake Borgne Surge Barrier is the largest surge barrier in the US.
The following videos describe how subsidence is leading to sea level rise in the Mississippi Delta region and how engineering is being used to combat it.
The design of all of the New Orleans flood protection is based on the elevation of a flood that occurs, on average, once every 100 years. In other words, there is a 1% chance each year that the system will be overwhelmed. You might ask why this risk is being taken and why the structures have not been built higher. The answer is that it costs a great deal more money to build the structures to withstand higher water levels.
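That 1% annual figure compounds over time. A quick calculation (assuming independent years, which is a simplification) shows the chance that the design level is exceeded at least once over a 30-year mortgage:

```python
def prob_at_least_one_exceedance(annual_prob, years):
    # chance of at least one exceedance = 1 - chance of none in every year
    return 1 - (1 - annual_prob) ** years

# A "100-year" design level (1% per year) over 30 years:
print(round(prob_at_least_one_exceedance(0.01, 30), 2))  # 0.26
```

So a homeowner behind "100-year" protection faces roughly a one-in-four chance of seeing the design level exceeded during a 30-year mortgage, which is one reason the "100-year flood" label is so widely misunderstood.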
Although New Orleans has received much attention after Katrina, and rightly so, many other areas are also at great risk from hurricane winds, waves and storm surge.
One of the most popular holiday spots on the East Coast, the Outer Banks (OBX) of North Carolina, is also highly susceptible to sea level rise. In fact, recent research suggests that the OBX may be experiencing some of the fastest sea level rise on the planet, at least partly as a result of very rapid subsidence of the land. Sea levels along the OBX have climbed as much as 3.7 centimeters (1.5 inches) per decade since 1980, while globally they have risen up to 1.0 cm (0.4 inches) per decade. Models suggest that sea level may rise by up to 1.6 meters (5 ft) by 2100! The Outer Banks are part of a chain of barrier islands that stretch along the Atlantic seaboard from Florida to Massachusetts. Barrier islands are delicate sand bodies that generally move towards the adjacent continent as sea level rises; this occurs largely because storms erode sand from the seaward side of an island and deposit it on the landward side. The OBX are moving at a rate that is quite alarming from a development point of view, a point best illustrated by the famous Cape Hatteras lighthouse. The lighthouse was built in 1870 some 1,500 feet from the ocean. By 1970, the lighthouse was just 120 feet from the ocean, and its fate was very much in doubt. Fortunately, the lighthouse was moved some 2,900 feet inland. At the rate the OBX are shifting, all development is threatened. Moreover, development modifies the response of the coastal environment to storm erosion, often increasing erosion rates along the undeveloped parts of the coastline.
The OBX are extremely vulnerable to storms as a result of their exposed position in the open Atlantic Ocean, which places them directly in the path of many hurricanes. As we have seen, experts predict there will be fewer, but more powerful, storms in the future. The impact of these storms will be amplified by continuing sea level rise, and a powerful hurricane could have an extremely destructive impact on these fragile barrier islands.
One of the most vulnerable places to sea level rise in the US, if not the world, is Miami, and more specifically Miami Beach, which lies on a fragile barrier island along the coast. Miami has a booming tourism and real estate industry, and Miami Beach and other beachside communities house some of the most luxurious developments in the country. According to Zillow, Miami contains 26 percent of the US homes at risk from rising seas. Sea level in the area is rising at a rapid rate, with six inches of rise expected by 2030 and up to six feet by 2100, and that rise threatens nearly everything. Six feet of rise would leave much of Miami Beach (and the Greater Miami area) underwater. Even six inches would make hurricanes more damaging, with storm surge pushing further inland and causing more destruction; six feet would be truly devastating during these events. Left unchecked, the city could lose its thriving economy and face massive damage from the rising waters within a couple of decades.
The last decade in Miami Beach has seen a dramatic increase in “sunny day” flooding events, which generally occur at monthly high tides, and some more drastic “king tide” flooding during the highest tides of the year. All of these events have led to flooded streets and parking lots. Rising seas also threaten Miami’s drinking water as salt water is beginning to seep into the city’s aquifer, threatening its water supply (the salt water incursion issue discussed in Module 8). Sea level rise also has the potential to destroy many of the city’s septic systems.
Faced with these threats, the city managers have taken on extremely proactive measures to stave off the rising seas and give the city a future. The city has raised taxes and employed some of the top coastal engineers to design systems to hold back the rising ocean. These systems include elevating the roads and building walls to protect key structures. Powerful pumps are being installed to drain water away during the king and sunny day high tides. Valves are being placed in drinking water systems to keep salt water out of the drinking water supply.
All of these measures are designed to ensure that the city stays vibrant well into the future even when the rate of sea level rise increases.
Next, let's travel to Venice. The average rate of land subsidence there is about 1 mm/year, largely due to consolidation of the sediment and pumping of aquifers. This modest rate seems at odds with reports of crumbling building foundations and regular flooding of city monuments. What is going on?
Venice has a long legacy of devastating flooding, and sea level rise now poses a severe threat: it could render the historic city uninhabitable, destroy art and architectural treasures, and cripple a major tourism industry. Today, the average elevation of Venice is close to sea level. The city lies in a location with a moderate (certainly not large) tidal range of less than one meter, yet high tides regularly cause flooding. In fact, flooding driven by high tides submerges the lowest 14 percent of the city four times a year. The situation is as dire as it is in part because the land the city is built upon subsided rapidly during the 20th century, largely due to the removal of groundwater; the city sank 12 centimeters in the two decades before 1970. The old buildings are constructed on wooden pilings that sink into the mud, making them even more susceptible to subsidence.

With so much at stake, the city is fighting back. Construction is well underway on the MOSE (Modulo Sperimentale Elettromeccanico, or Experimental Electromechanical Module) project, which will use giant barriers to keep floodwater from the Adriatic Sea out of the city. The flood control system will consist of 78 gates that rise out of the water when floodwaters threaten, blocking the three entry points to the lagoon. The gates work somewhat like giant airbags: at rest they lie on the seabed filled with water, and when a flood threatens, compressed air is pumped in to expel the water and raise them. Once the threat passes, the gates fill with water again and sink back to the seabed. The project has suffered numerous delays, and there is growing frustration that it will be too little, too late by the time it is completed, hopefully in 2021. Meanwhile, the floods continue.
The Netherlands is an extremely low-lying country: about a quarter of it lies below sea level, and half of it is within a meter of sea level. The city of Rotterdam, Europe's busiest port, lies below sea level. Two-thirds of the country is vulnerable to flooding from the sea and from rivers. Even before the threat of sea level rise appeared on the horizon, the country had invested a great deal in engineering projects to keep the sea at bay. Like New Orleans with Katrina, the Dutch had their own "wake-up call" in 1953, when a high-tide storm breached levees, flooded a massive area, and killed 1,900 people. The country responded by developing a major flood control enterprise, the most extensive storm protection system in the world. Currently, the land is protected by a massive system of human-constructed levees, storm-surge barriers, dunes, canals, pumping stations, and floodgates that are deployed when tides are abnormally high or storms threaten. The system is designed to withstand flood levels that occur only once every 10,000 years (note that the new levee system in New Orleans is designed to withstand the 100-year flood). However, with rising sea levels and the potential for increased storm activity, the Dutch are looking to increase their efforts to keep the sea at bay. They plan to invest over $2 billion per year for the next 100 years to change drainage patterns and relieve the current strain on certain canals, in part by flooding land that is dry today. The scheme also proposes to reclaim more land from the ocean and push the shoreline out to sea. The Dutch are even mulling the construction of floating cities. With decades of engineering experience, countries threatened by rising seas are looking to the Dutch for advice and innovation.
The following video provides an overview of engineering in the Netherlands designed to combat sea level rise.
Finally, let's visit the country of Bangladesh, where a large swath of the populous coastal region lies very close to sea level. In fact, a one-meter rise in sea level would inundate 30,000 km2 and displace 20 million people. This area is already extremely prone to flooding from cyclones, and this danger will increase with sea level rise.
The following video provides a stark picture of flooding in Bangladesh in 2004.
A dire picture also emerges in small island nations of the western Pacific, including Kiribati, Tuvalu, and the Marshall Islands, where sea level rise over the next century could cause these nations to disappear completely. In fact, the President of Kiribati has publicly stated that the 100,000 citizens of his country may need to be relocated as a result of climate change and sea level rise. In Tuvalu, large tides occurring in January, February, and March, and again in August, September, and October, known as King Tides, flood parts of the main island and the capital city, Funafuti. Before sea level rise became a problem, these tides did not cause extensive flooding. Now they are responsible for salinization of the soil, which renders it infertile, for the spread of disease from leaking septic tanks, and for the loss of habitable land.
As in the Torres Strait (discussed below), inhabitants of many low-lying islands in the Pacific are building sea walls to keep the rising seas out. However, in many places the walls are built out of coral, often mined from fragile offshore reefs that themselves offer some protection from the rising seas.
One hundred and fifty islands lie in the Torres Strait between Cape York, Australia, and Papua New Guinea; 17 of them are permanently inhabited by about 7,000 people belonging to indigenous populations related to the Aboriginal peoples of mainland Australia. With an average elevation close to sea level and a large tidal range (spring tides are about 10 feet), the islands are very prone to the effects of sea level rise. Trends suggest that the islands may be experiencing a slightly higher rate of sea level rise than the global average. Even if the islands remain above water as sea level rises, they will still be far more prone to the impact of extreme events such as tropical cyclones, as well as to regular high tides. With very limited resources, the islands have mounted a piecemeal response to rising sea levels, building sea walls along low-lying areas. Unfortunately, these walls are in dire need of repair and reinforcement. Torres Strait Islanders have pleaded with the Australian government to help rebuild and fortify the sea walls to protect the vulnerable areas. In August 2011, the government appropriated $22 million to build the walls; however, it backtracked four months later, and thus the plight of the islanders remains unresolved.
So, a final word. As you can imagine, Hurricane Sandy has reinvigorated the call for flood barriers and sea walls at the entrance to New York City's harbor. Such structures are very expensive, but they would have prevented an enormous amount of the destruction from the storm. This is not the only city where flood barriers will be needed in the future, as the video below graphically describes.
Download this lab as a Word document: Lab 10: Impact of Sea Level Rise on Coastal Communities [339] (Please download required files below.)
There are two parts of this lab. In the first, you will look at recent trends in sea level from tidal gauge data going back to about 1940. This will allow you to determine the places where sea level is rising the fastest. In the second part of the lab, you will be looking at future sea level rise projections for certain areas. The first part of the lab is in Google Earth; the second part is in a web browser (the Google Earth files for this type of analysis don’t work well yet).
PSMSL Tide gauge file [341]
In this part of the lab, you will look at tide gauge data showing relative sea level rise data back to about 1940. The goal will be to determine trends from rather noisy data, determine places where relative sea level is rising faster than others, and the reason for the rapid rise. In the practice lab, we will focus on the West Coast of the US.
Load the PSMSL Tide gauge file [341] in Google Earth. The file shows tide gauge data from around the world which will allow you to explore the rates of sea level rise. The dots show stations organized by the last reported year. Click on stations, and you will see a PSMSL ID number; click on that, and you will get tidal gauge data in mm (for several locations several dots appear; make sure you click on one of the dark green dots).
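When you eyeball a trend through noisy tide gauge data, you are doing by hand what a least-squares line fit does formally. As a sketch of that idea (the data below are synthetic, not actual PSMSL records), a linear fit can recover an underlying rate of rise despite large year-to-year noise:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic tide-gauge-style record: 80 years of relative sea level (mm)
# with a true trend of 3 mm/yr buried in 20 mm of year-to-year noise.
years = np.arange(1940, 2020)
true_rate = 3.0                                    # mm per year
sea_level = 7000 + true_rate * (years - 1940) + rng.normal(0, 20, years.size)

# Least-squares line fit; the slope is the estimated rate of rise
rate, intercept = np.polyfit(years, sea_level, 1)
print(f"estimated rate: {rate:.1f} mm/yr")         # close to the true 3 mm/yr
```

Note that a tide gauge measures *relative* sea level: the fitted rate mixes the global rise with local land motion (subsidence or uplift), which is exactly why rates differ so much from station to station.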
A. Rising
B. Falling
A. Over a meter
B. Over 0.25 m
C. Under 0.25 m
A. Rising
B. Falling
A. About 0.5 m
B. About 0.3 m
C. About 0.1 m
A. Isostatic rebound (removal of ice)
B. Subsidence
C. Uplift due to tectonic activity
Prediction of the extent of flooding that results from sea level rise is much simpler than predicting the absolute amount of sea level rise that will occur over coming decades. Flooding predictions are based on digital elevation maps that have great accuracy and resolution. The NOAA sea level rise and coastal flooding tool allows you to look at areas in detail and make predictions about the future under higher seas. At the top, you can enter an address to look closely at an area. For the practice, we will look at Tampa, FL, so enter this in the search window. Note: it can take a while for a clear image to come into view. Remember 1000 mm is a meter.
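Tools like the NOAA viewer rest on a simple idea often called a "bathtub" model: compare each cell of a digital elevation map with the projected water level and flag the cells that fall below it. A minimal sketch of that logic (the toy elevation grid here is invented, not NOAA data):

```python
import numpy as np

# Toy digital elevation model: cell elevations in meters above current sea level
elevation = np.array([
    [0.2, 0.5, 1.1, 2.0],
    [0.4, 0.9, 1.6, 2.4],
    [1.0, 1.8, 2.2, 3.0],
])

def flooded_fraction(dem, rise_m):
    """Fraction of grid cells inundated by a given sea level rise.
    Pure bathtub model: ignores barriers, drainage, and connectivity."""
    return (dem <= rise_m).mean()

print(flooded_fraction(elevation, 1.0))   # 1000 mm = 1 m of rise
```

Real viewers refine this by also checking hydrologic connectivity, so that a low-lying spot sealed off behind higher ground is not flagged as flooded.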
Go to the NOAA Sea Level Rise Viewer [342] and click on Get Started. You will see a map focused on the US, which is where we will be working. On the bottom left, please make sure the elevation scale is in meters, not feet. We will look at three different views: (1) sea level rise, which allows you to see how the area floods as you move the slider up; (2) flood frequency, which shows the areas that currently flood frequently; and (3) vulnerability, which is a comprehensive assessment of how vulnerable certain regions are to sea level rise (based on elevation as well as population density and demographics such as the percentage of people living below the poverty line).
Using these three maps, answer the following questions:
A. Low elevation
B. Demographic factors
A. Low elevation
B. Demographic factors (poorer residents)
Now go back out to St. Petersburg. Compare downtown St. Petersburg with St. Pete Beach.
A. Downtown St. Petersburg
B. St. Pete Beach
A. Downtown St Petersburg
B. St. Pete Beach
A. Because it is close to sea level
B. Because of demographic factors (wealthy residents)
In this module, you should have mastered the following concepts:
You should have read the contents of this module carefully, completed and submitted any labs, the Yellowdig Entry and Reply and taken the Module Quiz. If you have not done so already, please do so before moving on to the next module. Incomplete assignments will negatively impact your final grade.
Lab 10: Impact of Sea Level Rise on Coastal Communities
Imagine this. Martians come to Earth 10 million years from now to study the apparent demise of what was once a thriving planet. They find textbooks with diagrams of the geological timescale and its division into eras, separated by the major mass extinctions. Then they take cores and sample bones in sediment deposits. What they find is a final mass extinction that took place 9.8 million years before their arrival, and, 200,000 years before that, the beginning of a new era: the Anthropocene.
Now, the Martian part of this tale is fiction, but sadly the Anthropocene part is not. We have already entered that era, and paleontologists and ecologists alike believe that the sixth major mass extinction event in Earth's 4.6-billion-year history has already begun. Rates of species loss are as high as at any other time in the last 65 million years, since the time the dinosaurs went extinct. And without a revolution in our stewardship of the planet we call home, these rates are bound to accelerate in the future.
Terrestrial ecosystems are in peril. Let's be clear: climate change is not the sole culprit here. Humans have interfered with ecosystems in numerous other ways. In this module, we will see how this has happened and what is at stake in the future. We will begin by presenting how extinction happens from a theoretical point of view. Then, we will present one threatened representative from many of the major phyla to observe the impact of human activities and attempt to predict their future fate. Finally, we will observe a few "winners" of the loss of global species diversity.
On completing this module, students are expected to be able to:
After completing this module, students should be able to answer the following questions:
Below is an overview of your assignments for this module. The list is intended to prepare you for the module and help you to plan your time.
| Action | Assignment | Location |
| --- | --- | --- |
| To Do | | |
Species are amazingly resilient to environmental change. We know this from the groundbreaking paleontological compilation of the global diversity of genera by David Raup and Jack Sepkoski from the University of Chicago (illustrated below). The compilation, based on a tabulation of all of the occurrences of fossil genera in the paleontological literature, shows millions upon millions of years with relatively low extinction rates punctuated by five intervals with extremely high rates. These five intervals represent major mass extinctions that had a profound impact on the history of life.
The three youngest mass extinction events were caused by extraordinary perturbations to the environment, associated in two cases (the Permian-Triassic and Triassic-Jurassic) with gigantic volcanic eruptions, and in the third (the Cretaceous-Paleogene) with the impact of a 10-km-wide asteroid. The exact cause of death (referred to as the killing mechanism) of species during these events is still hotly debated, however.
In the case of the Permian-Triassic event, by far the most severe mass extinction in Earth history, when up to 96% of marine and 70% of terrestrial species were eradicated, it is possible that many species were asphyxiated or poisoned by low oxygen and excessive amounts of hydrogen sulfide and CO2. For the Triassic-Jurassic event, ocean acidification is the purported cause. And in the case of the Cretaceous-Paleogene boundary, a combination of repercussions of the impact, weeks to months of darkness, global wildfires, ocean acidification, and metal poisoning, were likely responsible. The survivors of these events were either uniquely adapted to the environmental perturbations or lived in a refuge and escaped the extremity of the geologic conflagration. For example, the dinoflagellates, the same organisms that are responsible for modern red tides (Module 7), survived the Cretaceous-Paleogene event by resorting to their resting stage, while the calcareous plankton, including the coccolithophores and planktonic foraminifera (see Module 1) were nearly rendered extinct. The deep-sea benthic foraminifera also went through the Cretaceous-Paleogene relatively unscathed, as their habitat was not perturbed to nearly the same extent as the surface ocean and land.
Now, the species that went extinct during these mass extinction events were hit with environmental perturbations so severe that they could not adapt to them. By comparison, species have been able to adapt much better to the more typical climate changes that have occurred during the majority of Earth history, and this explains the more modest background extinction in the compilation. Faced with environmental change, a species has three choices, in the words of biologist G. R. Coope from the University of London: adapt, move, or die. The key to any species' survival is that it has developed tolerance to a range of environmental conditions, and if conditions change slowly enough, most species have the ability to adapt. If a species cannot adapt, it may still be able to move. If it can move no further, as we will see later in the module, it will become extinct.
In the paleontological record, species have gone extinct as a result of climate change when their populations became so isolated that they could no longer maintain a viable population size. Frequently, climate change likely altered some other environmental variable, and that alteration drove the extinction. And in almost all cases, competition with other species for resources ultimately caused the species' demise.
Even though there are millions of extinct species to study in the paleontological record, and paleontologists have devoted a great deal of attention to unraveling the causes of their demise, we still cannot determine the exact reason for extinction for the vast majority of them. Solving this puzzle requires constraining the environment at the time of the extinction, yet the proxies on which we rely for environmental information generally cannot resolve the full range of parameters we require. More seriously, the geological record generally provides time resolution only at the millennial scale, while the fatal environmental changes likely occurred over decades and centuries.
As we will see in the remainder of this module, ecologists are making incredible progress understanding the causes of the demise of modern species. However, even with its shortcomings, the paleontological record offers important information about adaptation, species resilience, and their extinction risk. Times in the past when the climate warmed very rapidly, such as the Paleocene-Eocene thermal maximum (PETM) (Module 1), are of particular interest.
So, now, let's see how humans are causing the sixth largest mass extinction.
All species have a range and a niche. The range of a species is the distribution of all populations of individuals belonging to it. The niche of a species is like its job, how it obtains resources and interacts with other species. Climate change is impacting the range and the niche of all modern species. We will provide several examples of range shift, changes in the distribution of species as a result of climate change, and how it can result in species extinction. Prediction of the future niche of a species, literally whether it will keep or lose its job, is much more difficult, as we will see.
As we saw in Module 7, corals face a significant possibility of extinction in coming centuries as a direct result of anthropogenic climate change. With the exception of a few deep-sea species, corals are restricted to the shallow, sunlit upper part of the ocean. They need to live in this zone to receive the sunlight that is critical for their algal symbionts. This depth range is forecast to become inhospitable to corals, as decreasing carbonate saturation and increased temperature (and the consequent bleaching) will render calcification more difficult. Corals will have nowhere to go. They cannot move deeper, so if they cannot adapt, they will become extinct. Remember: adapt, move, or die.
There is ample evidence that climate is impacting the abundance and distribution of modern terrestrial ecosystems as well. Terrestrial species have become, and will continue to become, extinct when they reach the limits of their feasible range and are unable to adapt to the new conditions.
There is overwhelming evidence, for example, that warming is causing a shift in the ranges of species, generally towards higher latitudes (i.e., towards the poles) and, in mountainous areas, towards higher elevations, where conditions are cooler. Species can literally be driven off the top of a mountain as they, or their predators, seek a cooler habitat. A good example of this is the Toucan, a large-billed bird that inhabits mountainous areas of Central America and is relocating to higher elevations in search of cooler and wetter conditions. The omnivorous Toucan has become a predator of the infant Quetzal, an extremely beautiful bird that lives at these higher elevations. The Toucan plucks baby Quetzals out of their nests with its long beak and is causing Quetzal populations to drop.
In the higher latitudes, species can be forced to the margins of a continent or have their natural habitat or food supply profoundly altered by climate change. Cases in point are the polar bears, discussed in significant detail later in this module, and penguins. Both Adelie and Emperor penguin populations have decreased by more than 50% in Antarctica as a result of melting sea ice and their continual poleward range shift in search of a stable habitat.
Island inhabitants are extremely susceptible to extinction as a result of climate change and other human impacts. Island species have a limited gene pool and often occupy a very specialized niche. On islands, range shift becomes an extremely effective extinction mechanism; there is literally nowhere to move. The widely cited case in point is the birds that were endemic to islands of the Pacific, from Micronesia to Polynesia and New Zealand. Research shows that the magnitude of bird extinction following human colonization of the Pacific islands was staggering: about two-thirds of bird species went extinct between the time humans colonized the islands and the arrival of the first European settlers. As we saw in Module 10, islands are extremely susceptible to sea level rise, and loss of habitat is also a potential extinction mechanism in the future. In the past, snakes became extinct at high rates on Mediterranean islands as sea level rose over the last 18,000 years.
With the exception of high latitudes, mountainous regions, and islands, range shift is not generally an extinction mechanism on its own. More threatening to the livelihood of a species is when its niche changes due to competition, or when other human-induced agents become involved. One of the most serious threats to a species' ability to thrive is invasive species and other introduced agents, such as pathogens. Later in this module, we will see that pathogens may be responsible for the demise of the Golden Toad and the decline of the honey bees.
There is almost nothing as disturbing to an ecosystem as an invasive species. And possibly the best example of the havoc an invasive species can wreak is the Cane Toad. Native to Central and South America, the Cane Toad is a large amphibian species that is a voracious consumer of insects. The toad was introduced to Caribbean Islands, Australia, and elsewhere in the hope that it would help control agricultural pests. The toad was brought to Australia in 1935 to help prevent the destructive cane beetle from consuming the sugar cane crop. This experiment has been a dramatic failure, and today the Cane Toad is possibly the most infamous invasive species in the world and the most hated creature in Australia.
The Cane Toad does eat voraciously. But unfortunately, it has not fed just on the cane beetle. Today it preys upon small rodents, snakes, and marsupials. One species that has become endangered as a direct result of Cane Toad predation is the Northern Quoll. Certain species of lizards and snakes have also declined. However, this is not the end of the story. The Cane Toad population has surged as a result of its ability to reproduce rapidly and the lack of a natural predator. An individual female can produce 8,000-25,000 eggs at a time, in gelatinous strings up to 20 meters in length. Juvenile toads grow rapidly and quickly reach sexual maturity. As a result, the Cane Toad population in Australia has grown from the initial introduction of 100 individuals to over 200 million today. Because the toads are so large, they can move rapidly, up to 40 kilometers per year, and they have spread over much of the northern tier of the continent.
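The population figures above imply a striking growth rate. As a back-of-the-envelope illustration (assuming simple exponential growth from roughly 100 toads in 1935 to roughly 200 million over about 85 years; both figures are round numbers from the text):

```python
import math

n0 = 100            # toads introduced in 1935
n = 200_000_000     # rough population today
years = 85          # roughly 1935 to 2020

# Solve N = N0 * exp(r * t) for the average per-capita growth rate r
r = math.log(n / n0) / years
print(f"average growth rate: {r:.2f} per year")   # ≈ 0.17, i.e. ~17% per year
```

An average population growth of about 17% per year, sustained for most of a century, is what an invasive species with abundant prey and no natural predators can achieve.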
The trait that makes the Cane Toad so threatening, besides its explosive growth, is the poisonous secretion produced by the glands behind its head (the parotoid glands) when it is threatened. Even tadpoles are highly toxic. The toxins are particularly deadly to small animals, but they also threaten populations of crocodiles and turtles. The long-term impact of the Cane Toad is yet to be determined. However, the toad is a serious threat to ecosystems, and methods to control its populations have been unsuccessful. In the course of their remarkable expansion, Cane Toads have become an enemy of the Australian public, who fight them off with cricket bats and golf clubs.
Ecologists have a great deal of difficulty predicting the future. As we have already described, and as we will see throughout this module, the threats to species today are extremely complex and difficult to quantify. However, because of the importance of predictions in conservation and policy, modeling extinction risk has become part of the ecologist's toolkit. A wide array of data and methods are applied, but the key inputs to a model of extinction risk are a species' population size and range area, and the rates at which those parameters are likely to change under projected climate change. Extinction risk models often use climate and environmental projections (temperature, drought) from the different emission scenarios (Module 5). Forecasts of extinction often cause great alarm. An infamous case was the 2004 prediction by conservation biologist Chris Thomas of the University of York that up to a million species would become extinct by 2050. The paper in which the models were described has become one of the most scrutinized publications in ecology, and the consequent wrangling over technique and interpretation has exposed just how problematic such forecasts are.
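One classic ingredient of such forecasts is the species-area relationship, S = cA^z: the number of species a region supports scales with its area raised to a small exponent z, often taken near 0.25. Shrinking the habitable area then implies a fraction of species committed to extinction. A minimal sketch of that logic (the z value and habitat-loss figure here are illustrative choices, not numbers from the Thomas paper):

```python
def fraction_lost(area_remaining_fraction, z=0.25):
    """Species-area relationship S = c * A**z: the fraction of species
    expected to persist scales as (A_new / A_old)**z."""
    return 1.0 - area_remaining_fraction ** z

# If climate change leaves only 10% of a region's habitable area:
print(f"{fraction_lost(0.10):.0%} of species eventually lost")   # → 44%
```

Note the word "eventually": this approach predicts species committed to extinction, not when they will disappear, and that ambiguity is one of the points critics of such forecasts have seized on.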
Without some kind of assessment of risk, however, it is not possible to develop a strategic plan for species conservation backed up by sound environmental policy. The International Union for Conservation of Nature (IUCN) was founded in 1948 with the goal of assessing the magnitude of the extinction threat of modern species and the measures that are being taken towards their conservation. The organization collects the latest ecological data on threatened species and evaluates their status and extinction risk. The IUCN compiles an up-to-date Red List of Threatened Species [349] that summarizes the current status using a set of categories illustrated in the figure below.
The key categories in the Red List system are Critically Endangered, Endangered, and Vulnerable, and each of these categories is defined by very specific information on changes in the abundance and distribution of the species. For example, here are the requirements for a species to be considered Critically Endangered, quoted directly from the official Red List criteria (this information is provided to give you an appreciation of the metrics, you are not required to remember the details!).
A taxon is Critically Endangered when the best available evidence indicates that it meets any of the following criteria (A to E), and it is therefore considered to be facing an extremely high risk of extinction in the wild:
A. Reduction in population size based on any of the following:
1. An observed, estimated, inferred or suspected population size reduction of ≥90% over the last 10 years or three generations, whichever is the longer, where the causes of the reduction are clearly reversible AND understood AND ceased, based on (and specifying) any of the following:
(a) direct observation
(b) an index of abundance appropriate to the taxon
(c) a decline in area of occupancy, the extent of occurrence and/or quality of habitat
(d) actual or potential levels of exploitation
(e) the effects of introduced taxa, hybridization, pathogens, pollutants, competitors or parasites.
2. An observed, estimated, inferred or suspected population size reduction of ≥80% over the last 10 years or three generations, whichever is the longer, where the reduction or its causes may not have ceased OR may not be understood OR may not be reversible, based on (and specifying) any of (a) to (e) under A1.
3. A population size reduction of ≥80%, projected or suspected to be met within the next 10 years or three generations, whichever is the longer (up to a maximum of 100 years), based on (and specifying) any of (b) to (e) under A1.
4. An observed, estimated, inferred, projected, or suspected population size reduction of ≥80% over any 10 year or three generation period, whichever is longer (up to a maximum of 100 years in the future), where the time period must include both the past and the future, and where the reduction or its causes may not have ceased OR may not be understood OR may not be reversible, based on (and specifying) any of (a) to (e) under A1.
B. Geographic range in the form of either B1 (extent of occurrence) OR B2 (area of occupancy). Extent of occurrence estimated to be less than 100 km2, and estimates indicating at least two of a-c:
1. Severely fragmented or known to exist at only a single location.
2. Continuing decline, observed, inferred, or projected, in any of the following:
(a) extent of occurrence
(b) area of occupancy
(c) area, extent, and/or quality of habitat
(d) number of locations or subpopulations
(e) number of mature individuals.
3. Extreme fluctuations in any of the following:
(a) extent of occurrence
(b) area of occupancy
(c) number of locations or subpopulations
(d) number of mature individuals.
4. Area of occupancy estimated to be less than 10 km2, and estimate indicating at least two of a-c:
a. Severely fragmented or known to exist at only a single location.
b. Continuing decline, observed, inferred, or projected, in any of the following:
(i) extent of occurrence
(ii) area of occupancy
(iii) area, extent, and/or quality of habitat
(iv) number of locations or subpopulations
(v) number of mature individuals.
c. Extreme fluctuations in any of the following:
(i) extent of occurrence
(ii) area of occupancy
(iii) number of locations or subpopulations
(iv) number of mature individuals.
C. Population size estimated to number fewer than 250 mature individuals and either:
1. An estimated continuing decline of at least 25% within three years or one generation, whichever is longer (up to a maximum of 100 years in the future), OR
2. A continuing decline, observed, projected, or inferred, in numbers of mature individuals AND at least one of the following (a-b):
a. Population structure in the form of one of the following:
(i) no subpopulation estimated to contain more than 50 mature individuals,
OR
(ii) at least 90% of mature individuals in one subpopulation.
b. Extreme fluctuations in number of mature individuals.
D. Population size estimated to number fewer than 50 mature individuals.
E. Quantitative analysis showing the probability of extinction in the wild is at least 50% within 10 years or three generations, whichever is the longer (up to a maximum of 100 years).
The definitions of Critically Endangered, Endangered, and Vulnerable differ largely in the percentages involved: where Critically Endangered requires a population reduction of 80%, for example, Endangered uses thresholds of 70% and 50%, and Vulnerable uses 50% and 30%. A species can be moved from one category to another if none of the relevant criteria has been met for a period of five years.
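As a minimal sketch of how these percentage thresholds separate the categories, the decision rule for a population-reduction assessment can be written as a simple function. The function name and the exact threshold mapping here are illustrative assumptions; a real Red List assessment weighs criteria A through E together.

```python
# Illustrative sketch: map an observed population reduction (over 10
# years or three generations) to a Red List category using criterion
# A2-style thresholds (assumed here: CR >= 80%, EN >= 50%, VU >= 30%).
# A real assessment considers criteria A through E together.

def risk_category_from_decline(percent_decline):
    """Return the Red List category implied by a % population reduction."""
    if percent_decline >= 80:
        return "Critically Endangered"
    elif percent_decline >= 50:
        return "Endangered"
    elif percent_decline >= 30:
        return "Vulnerable"
    return "Below criterion A thresholds"

print(risk_category_from_decline(85))  # Critically Endangered
print(risk_category_from_decline(35))  # Vulnerable
```

Note that the same species could qualify under a different criterion (tiny range, few mature individuals, or a quantitative extinction-risk analysis) even if its measured decline falls below these percentages.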
The other means of describing extinction risk is to identify extinction "hot spots": areas where species are generally more threatened by human activities. A current map of hotspots is shown below; you can see that they are concentrated in the tropics and subtropics, as well as on islands and in mountain regions.
The rest of the module is devoted to telling the stories of species that have become extinct or are in the extinction crosshairs.
Possibly the most threatened insects on the planet are honey bees, organisms that play a vital role in agriculture through pollination. The first signs of trouble appeared in the US in 2006, when beekeepers reported losses of up to 90% of their hives. Losing hives over the winter months is not uncommon, but the magnitude of these losses was almost unprecedented. The problem, officially termed colony collapse disorder (CCD), is accelerating in the US and has spread to other Northern Hemisphere countries. When a colony collapses, it ceases to function. Typically, the queen and immature bees remain, but few other adult bees are left in the hive; honey and pollen stores are often left behind, and, oddly, no dead bees are found.
Since 2006, annual losses of colonies have averaged about 33 percent, with roughly a third of that amount resulting from CCD. The losses grew dramatically worse in 2012, however: in the last year, 50% or more of all hives have been lost to CCD, and the situation is precarious.
Honey bee pollination is a crucial part of agriculture, responsible for some $15 billion in produce each year across a diverse array of berry, fruit, and vegetable crops. These crops represent about one of every three bites of food in a typical diet. Honey bees are not the only potential pollinators for most crops, but they are the most prolific and the easiest to manage in agriculture. The one major crop that cannot be pollinated without honey bees is almonds. Almonds in the US are grown almost exclusively in California and are a multi-billion-dollar-a-year industry. The almond industry is very hive-intensive: about 800,000 acres are devoted to the crop, requiring at least two hives per acre.
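A quick back-of-envelope check shows just how hive-intensive the almond crop is. Both figures below come straight from the text; only the calculation is added.

```python
# Back-of-envelope check of the almond-pollination demand described
# above: about 800,000 acres, at a stated minimum of two hives per acre.
acres = 800_000
hives_per_acre = 2  # stated minimum
hives_needed = acres * hives_per_acre
print(f"{hives_needed:,} hives at minimum")  # 1,600,000 hives at minimum
```

That is roughly 1.6 million hives needed for a single crop each spring, which is why almond growers are so exposed to colony losses.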
Researchers are still stumped by the causes of CCD. A number of hypotheses are being actively investigated, but so far no single factor has been isolated as the main culprit. Quite possibly, the causes are synergistic, acting in concert with one another. Here are the candidates:
New evidence, however, points to a class of pesticides called neonicotinoids. These are applied in very small doses, often as a seed coating, so that the plant itself kills the insects feeding on it. The problem with these pesticides is that they do not degrade as rapidly as other varieties, and bees can carry contaminated pollen back to their hives, where other bees feed on it for months. These chemicals have now been banned by the European Union.
The number of professional bee-pollination services declined over the course of the 20th century, even before the onset of CCD. The prospects for honey bees are bleak, but it is unlikely that CCD will cause their extinction. The losses, though, will definitely threaten the viability of the bee-pollination industry, and the cost will undoubtedly be passed along to consumers in food prices.
Bird populations around the world face a series of profound threats. Climate change and the associated modification of the landscape are already isolating bird populations and may play a huge role in the future. Development and the conversion of agricultural land and forest to urban areas are isolating bird populations on "islands", making them vulnerable to extinction. Even so, the largest threat to birds today is not climatic; it is direct mortality caused by humans and domestic animals.
In terms of sheer numbers of casualties, domestic and feral cats pose an enormous threat to bird populations. Introduced by humans either to keep mouse and rat populations at bay or merely for companionship, cats are natural hunters of small mammals and birds. Domestication has not changed cats' behavior and anatomy to the degree it has in dogs, so cats have retained their ability to hunt and to survive in the wild, and they still interbreed readily with feral cat populations. Domestic and feral cats possess strong night vision, a keen sense of smell, and an ability to hear high-pitched sounds. Their paws and claws are built for silent stalking; once the prey is caught, the cat typically breaks its neck with its sharp canine teeth and powerful jaws. In a nutshell, domestic and feral cats are lethal hunters.
Cats' hunting ability and the threat they pose to birds have long been appreciated, but only recently have we gained an understanding of the magnitude of this threat. Cameras mounted on cats, together with extensive field studies, have provided more accurate estimates of the carnage, and it is staggering. In the US alone, cats kill between 1.4 and 3.7 billion birds annually, with feral populations accounting for the majority of that number. Each owned cat kills an average of 4 to 18 birds per year, and each feral cat 23 to 46. Translate these numbers globally, and the problem is clearly massive.
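The per-cat kill rates above can be turned into a rough national estimate. The cat population sizes used below are illustrative assumptions, not figures from the text; only the per-cat rates (owned: 4-18 birds per year, feral: 23-46) come from the passage.

```python
# Rough reconstruction of the annual US bird-mortality range from the
# per-cat kill rates in the text (owned: 4-18 birds/yr; feral: 23-46).
# The cat population sizes are illustrative ASSUMPTIONS, not from the text.
owned_cats = 85e6  # assumed number of owned cats in the US
feral_cats = 60e6  # assumed number of feral cats in the US

low = owned_cats * 4 + feral_cats * 23
high = owned_cats * 18 + feral_cats * 46

print(f"{low / 1e9:.1f} to {high / 1e9:.1f} billion birds per year")
# -> 1.7 to 4.3 billion birds per year
```

Under these assumed population sizes, the result lands in the same ballpark as the 1.4 to 3.7 billion estimate cited above, and it also shows why feral cats dominate the total: their per-cat rate is several times higher.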
Although domestic and feral cats represent a major threat to wild birds, the extinction risk is hard to determine at this point. In the US, at least, the list of prey species is extensive, with no particular species impacted significantly more than others. The largest threat, however, exists on islands, where populations are isolated, and on the mainland for species that are already endangered.
Besides owning cats and letting them roam outside, humans are responsible for bird mortality in many other ways. One area of responsibility that has received a lot of recent press is wind farms. There have been numerous reports that wind turbines cause a significant number of bird and bat deaths, either from direct impact with blades or indirectly by disruption of flight paths. However, reports of significant mortality have been contested, and it appears that if wind turbines are sited away from migratory paths and from endangered species, they pose no greater threat than other man-made structures. In fact, because of their sheer number, collisions of birds with buildings, especially windows, pose a much larger overall threat.
For a number of reasons, the habitat of birds is under assault. Humans are clearing areas of forest at unprecedented rates for agriculture and urbanization. Urban sprawl is eating up forest and grasslands. Both of these activities are eradicating sensitive bird habitats and threatening populations. Not all habitat change is bad for birds; there are some cases where clearing has been advantageous for bird species. However, more often than not, habitat loss causes a fragmented bird population that can begin the downward spiral towards extinction.
No example epitomizes this problem better than the Northern Spotted Owl of the Pacific Northwest of the US. This beautiful species has been thrust into a battle between the logging industry and developers, who seek to remove the lush cedar, fir, hemlock, and spruce forests in which it nests and forages, and conservationists, who are desperately trying to protect it. The billion-dollar logging industry has removed about 90% of the original "old-growth" forest, and the number of spotted owls has dwindled to approximately 2,000, most of them inhabiting federally owned lands. After years of politically charged debate, the spotted owl was listed as a threatened species in 1990, and lumber companies were required to leave 40% of the old-growth forest within 2 km of all spotted owl nests.
The spotted owl controversy raises an interesting ethical debate: should saving an endangered species take precedence over the livelihoods of citizens? It has been estimated that protection of the owl has cost the logging industry billions of dollars in losses and over 25,000 jobs. On the other hand, it has also been estimated that continued logging at the pre-regulation rate would have removed all the forest in about thirty years and forced all the mills to close anyway. The debate will certainly continue. Meanwhile, even with protection, the Northern Spotted Owl remains in rapid decline, especially in the northern part of its range; in British Columbia, only 20 breeding pairs remain, and the species is predicted to become locally extinct there within a few years.
Probably the most serious habitat loss for birds, as for many other groups, is occurring in the Amazon. A third of all bird species reside in this, the largest tropical rainforest in the world. Deforestation of the Amazon (as we observed in Module 10) has removed twenty percent of the original rainforest, and the rate is truly staggering: an area the size of a football field every minute! Over the next 20 years, a further twenty percent of the forest is projected to be removed to create land for cattle grazing and crops. Much of the clearing has been done by large commercial agribusinesses.
Besides removing nesting sites, clearing has destroyed food sources and made the remaining forest more susceptible to drought. Bulldozers and chainsaws used to clear land have caused noise pollution, fires have caused air pollution, and roads have disrupted habitats. The impact on birds and other creatures has been staggering, with 100 species now on the endangered list, ten of them critically endangered, meaning that extinction is likely.
One bird that has just been added to the list is the Rio Branco Antbird, a small bird that, as its name implies, feeds on ants. This species is losing its battle with the agribusinesses consuming its forest habitat in northern Brazil and southern Guyana at a rapid clip. At current rates of deforestation, the bird will become extinct within twenty years if measures are not taken to preserve its habitat.
The risk from deforestation is not limited to native birds, however. The region is a major wintering ground for birds that breed at temperate latitudes in North and South America, although the impact of habitat loss on these species is hard to determine.
Islands host some of the ecosystems most vulnerable to human destruction, because migration away from a threat is difficult and sometimes impossible. In fact, humans' arrival on Pacific islands, including Hawaii and Fiji, roughly 4,000 years ago led to the extinction of about 169 species of large birds through habitat destruction and hunting. In New Zealand, one in four bird species has been wiped out since humans populated the islands. The main victims are flightless birds, such as the giant moa, that cannot readily escape predators, as well as birds that nest on the ground or lay few eggs.
The grey-colored Dodo bird was approximately 3 feet tall and had a long hooked bill, yellow legs, and a tuft of curly feathers high on its tail (all known from drawings). It lived on the island of Mauritius in the Indian Ocean and was thriving when humans arrived in the 17th century, along with a brood of domestic animals. The Dodo was not accustomed to these newcomers and, because it was unable to fly, became very vulnerable prey. Accounts indicate that pigs, in particular, ate the bird and its eggs, and that humans cut down its forest habitat. The Dodo was extinct within decades of humans' arrival.
Current global estimates are that ten bird species will go extinct each year unless humans make major changes in behavior and put more resources toward the conservation of at-risk species.
For the last 250 million years of their 350-million-year evolutionary record, amphibians have been vulnerable to a number of natural threats. The group evolved in the Devonian and was one of the main predator groups on land for the first 100 million years of its history. After that point, however, the amphibians gave rise to the more versatile reptiles, which rapidly diversified and became dominant. Since then, amphibians, including frogs, toads, and salamanders, have occupied a very specialized niche. Today, this niche is becoming increasingly vulnerable to climate change and a litany of other human-inflicted problems. Nearly 500 species of amphibians are threatened today.
The basis of an amphibian's existence is the need to reproduce in water or in a moist substrate: the group lays gelatinous, unshelled eggs that desiccate readily. When amphibians evolved from fish, this strategy gave them the ability to search for new sources of food, namely plants on land. Thus, even though amphibians could forage away from the water, they had to return to it to reproduce. Later on, some amphibian species developed the ability to lay their eggs in moist substrates such as leaf litter. Because of this need for moisture, key amphibian habitats are the rainforests and cloud forests of the tropics. These regions are thought to host the highest amphibian diversity on the planet, with the Americas hosting half of the global total.
Today, the amphibians are as threatened as any other group, and their potential species extinctions are at the center of global diversity loss. The causes of diversity loss and extinction are complicated and likely to be synergistic (a term that means acting in concert with one another). One of the key threats, however, is periodic drought in temperate, and especially tropical, areas related to climate change.
As we saw in Module 4, periodic drought is an integral part of climate change, even in the lush rainforests of the world. One of the most diverse amphibian habitats known is the Monteverde Cloud Forest in Costa Rica, an area that, until recently, had high amphibian species diversity, including species endemic to the region. This diversity has encouraged a great deal of ecological investigation. Periodic drought at Monteverde, related to the ENSO cycle, and the retreat of the moist cloud layer to higher elevations have put pressure on amphibian habitat. Dry conditions limit amphibian reproduction in bodies of water and leaf litter, forcing species to shift their habitats to moister, higher elevations; in many cases, those higher habitats are already occupied by other species. Drought may have caused dehydration directly; alternatively, it may have made frogs susceptible to a pathogen. Monteverde has lost 40% of its frog and toad species since 1987, including the famous Golden Toad. The Golden Toad, which was endemic to Monteverde, is one of the most certain identified victims of extinction. In 1987, 1,500 individuals were observed in breeding pools, but the spring was dry and few tadpoles developed. The following year, only one Golden Toad was observed at the same location, and the last sighting occurred in 1989. The species is now officially extinct. Drought appears to have been the trigger of the Golden Toad's extinction, but not its direct cause: the chytrid fungus, which is thought to thrive under drought stress, causes a fatal skin disease that leads to convulsions, skin loss, and death.
Climate change is not the only stressor on amphibian populations. Amphibians are more susceptible to pollution than other groups because their skin is permeable, so toxins are able to invade critical organs. Experimental and field studies suggest that amphibians are highly susceptible to common insecticides, pesticides, and herbicides such as Roundup. These chemicals cause a number of developmental problems, including external deformities such as extra arms and legs, hermaphroditism (the same individual bearing both male and female reproductive organs), and damage to the central nervous system. Finally, increasing UV-B radiation is thought to cause genetic damage in amphibians. Many of these environmental changes and pollutants do not themselves drive amphibians to extinction; rather, a related pathogen, fungus, or disease delivers the final blow, as with the Golden Toad. Drought, for example, may weaken an amphibian population, but a drought-related pathogen may be the ultimate cause of a frog extinction. Similarly, the developmental or neurological problems caused by pesticides do not appear to kill amphibians outright, but they leave the animals vulnerable to fungal disease outbreaks.
All of this points to a very perilous picture for the amphibians. This inconspicuous group is at ground zero of the Anthropocene mass extinction. With extinction rates over 200 times the global average, potential losses of threatened species that would push rates to over 25,000 times that average, and a general lack of the research and stewardship needed to fully understand the root causes of their decline, the amphibians will likely be a major casualty of climate change and other anthropogenic activities. The story does not stop there: their loss may also have a direct impact on many other species. Amphibians are an important part of the diet of a number of reptiles, birds, and mammals, as well as the main predators of several groups of insects, so their extinction would have profound impacts on a broad part of the food chain. That is why many ecologists regard the amphibians as the "canary in the coal mine" for the impact of human activities on global diversity.
Polar bears, perhaps the fiercest predators on Earth, are rapidly becoming among the most vulnerable victims of climate change. The most significant threat to populations is the direct loss of hunting habitat and the resulting reduction in prey. Polar bears hunt ringed and other species of seals through the sea ice and cover large areas during their winter hunting season; in summer, when the ice cover recedes, bears fast. Bears catch their seal prey at breathing holes, often waiting hours or days for a seal to emerge, then quickly biting it on the head, pulling it out of the hole, and crushing its skull. Bears hunt on what is known as annual ice (ice that forms in winter and melts in summer) rather than permanent ice because it is thinner and more easily punctured by seals. They feast on seal fat and blubber, which are easy to digest and a good source of energy. Bears are generally opportunistic hunters, meaning they eat most types of readily available food; on rare occasions they take beluga whales and walruses, and while on land they will hunt reindeer, birds, and rodents, and will eat vegetation and garbage when hungry.
Polar bears spend the autumn and winter on moving ice floes and pack ice, following seal populations, and can find themselves several hundred miles from land. A typical bear can have a range of up to 350,000 square kilometers! They are adapted to extremely cold temperatures, with two separate layers of fur and a thick, 4-inch layer of blubber underneath. Their clear outer layer of fur provides excellent camouflage for hunting. A male bear can weigh over 750 kilograms.
In the spring and summer, polar bears retreat to land or permanent ice and fast on the energy reserves they have built up during the winter. It is during this time of year that females mate, with a gestation period of about eight months. During the winter, the female builds a den for the delivery of up to four cubs and spends much of this time in hibernation. Bears must consume very large amounts of food to maintain their lifestyle and to make it through the long winter; females, in particular, must maintain a high body mass to reproduce. On average, a bear must catch a seal every week or so during the hunting season to thrive. Females often refrain from mating if they do not have substantial food sources or the necessary fat reserves, and even when they do reproduce, cub survival rates may be reduced.
Polar bears live in discrete populations in the US, Canada, Russia, Norway, and Greenland and are rapidly becoming victims of disappearing sea ice. As sea ice thins, bears may at first gain a hunting advantage, because seals can be detected more readily through the thin cover; some years in the late 1970s were banner years for polar bears for this reason. However, as the recession of sea ice has accelerated in recent decades, polar bears have been forced to retreat to land earlier in the season to avoid being stranded on ice floes. A female bear needs to weigh more than about 200 kg to sustain herself through hibernation and pregnancy, and her cubs after birth; with insufficient body mass and energy reserves, females cease to reproduce. Data show that the average weight of female polar bears has been decreasing steadily toward this 200 kg threshold. A female reproduces only every two to three years and about five times in her lifetime, so when females delay reproduction by one or more years, bear populations can decline rapidly. The polar bear population is thus extremely vulnerable to climate change.
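A bit of illustrative arithmetic shows why delayed reproduction matters so much: with a breeding interval of two to three years and roughly five litters in a lifetime, each skipped breeding opportunity removes a sizable fraction of a female's lifetime output. The reproductive-span figure below is an assumption chosen only to be consistent with the roughly-five-litters lifetime noted in the text.

```python
# Illustrative arithmetic on the reproductive figures in the text: a
# female breeds every two to three years, roughly five times in her
# life. The 12-year reproductive span below is an ASSUMPTION chosen to
# be consistent with those figures.
BREEDING_SPAN_YEARS = 12

def lifetime_litters(interval_years, skipped=0):
    """Litters over the span, minus opportunities skipped when a female
    lacks the ~200 kg body mass needed to breed."""
    return max(BREEDING_SPAN_YEARS // interval_years - skipped, 0)

print(lifetime_litters(2))             # 6 litters at a 2-year interval
print(lifetime_litters(3))             # 4 litters at a 3-year interval
print(lifetime_litters(3, skipped=2))  # 2: each skip cuts lifetime output
```

Because the baseline number of litters is so small, skipping even one or two breeding opportunities cuts a female's lifetime reproduction by a quarter to a half, which is why population declines can follow so quickly once females fall below the weight threshold.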
Polar bears have also increasingly encroached on settlements surrounding their habitat in search of human garbage, which has led humans to kill bears deemed dangerous. Moreover, climate change has caused grizzly bears to migrate northward, putting them in competition with polar bears for resources.
The outlook for polar bears is bleak. Today the total population is between 20,000 and 25,000, but seven of the 19 recognized populations are known to be in decline, and predictions are for numbers to fall by 7,000 to 8,500 over the next 30 years. The shrinking sea-ice habitat will threaten populations near the southern edge of the range, and only the northern populations are likely to hang on well into the future. Various countries and agencies have listed the polar bear as threatened or endangered.
Some insects are thriving as a result of climate change and human activity. One of them is the red imported fire ant, Solenopsis invicta, a relative of the bees. This species arrived on US soil from South America by cargo ship, most likely in the 1930s through the port of Mobile, Alabama. Today, the fire ant occupies much of the south-central and southeastern US, from Texas all the way to southeastern Virginia, as well as parts of southern California. Like some bees, fire ant colonies may have a single queen or hundreds of queens, whose role is to reproduce and then invade new territory. And the queens do that well: a queen can release up to 3,500 eggs in one day. Fire ants sting humans and small animals, first anchoring themselves to the skin with their jaws and then injecting venom from the abdomen. In most cases, people are stung by many ants at the same time.
In most cases, the stings are merely painful and the swelling disappears rapidly; however, some people turn out to be very sensitive to fire ant venom, developing secondary infections and neurologic complications. There have been about 60 deaths in the US since the ants arrived. Fire ants also create large mounds, cause physical damage to the soil, and hinder cultivation. They spread very rapidly; several colonies have been found to have invaded a yard overnight.
Fire ants do not survive where the soil experiences a hard freeze, and that is where climate change comes in. As winters become less harsh and hard freezes less widespread, the ants have their eyes set on Washington, DC, and points farther north.
In this module, you should have learned the following concepts:
You should have read the contents of this module carefully, completed and submitted any labs, the Yellowdig Entry and Reply and taken the Module Quiz. If you have not done so already, please do so before moving on to the next module. Incomplete assignments will negatively impact your final grade.
Now, at the end of this course, we will explore what can be done about climate change. We know that it is happening and that its impacts are serious, affecting a broad range of important features of our habitat and our economic system. We also know that even if we take immediate, serious steps to reduce emissions, climate change will continue to occur because of lag times in the climate system. This means that adapting to a changing world is a necessity. But intelligent planning means pursuing mitigation as well. Mitigation refers to anything that reduces the amount of future climate change, chiefly by reducing greenhouse gas emissions, and thus limits the damages from climate change. It makes sense to pursue both adaptation and mitigation at the same time as we plan for the future, and if we do these things intelligently, there is every reason to expect that we can continue to thrive, though there will be some big challenges. One of the biggest has to do with the fact that the rich nations of the world are quite capable of managing these adaptations, but many of the less wealthy nations will face serious difficulties; there will need to be some serious thinking and planning about international aid to make the global adaptations successful for all.
Humans have successfully dealt with dramatic climate change in the past, but it has been a long time since we've confronted this kind of change. Let us not forget that humans were around during the last deglaciation when the temperature rose and sea levels rose more than 125 meters (gradually, though), and even through the Younger Dryas event of ~ 11 kyr ago when the North Atlantic region cooled by > 5°C in a decade or so and then warmed that much a few hundred years later. So, in thinking about how we can cope with climate change, it is not a matter of whether or not we will survive (there can be no serious question about that), but rather how successfully and smoothly we can adapt.
By this point in the course, you also have learned about the observations that tell us how the climate is changing and how those changes are affecting and will continue to affect many aspects of the Earth that are of great importance to humans. In this module, we take a look at the economic dimensions of climate change to better understand the costs and benefits of different approaches to dealing with climate change. We will also explore some possible consequences of different policy decisions to deal with carbon emissions.
In many respects, we are living in the age of climate change — we have just recently assembled the scientific understanding of how the climate is changing and how it is likely to change in the future, and the changes are occurring fast enough that we need to make some decisions rather quickly about what we will do. In many respects, this is the biggest and most important global problem we face at the present time, and in the coming decades; it is a problem that touches most aspects of human activities and welfare.
One could say that there are two extreme responses we could take — do nothing, or do everything within our power to stop climate change immediately. Neither of these extremes makes sense from an economic standpoint. Ignoring problems that are obvious is not a smart move — by doing nothing now, we subject ourselves to huge damages in the future. But going overboard is not smart either — if we allocate all of our resources to counter climate change, we risk damage to the global economic system that we are all dependent upon. The trick then is figuring out what course of action makes the most sense — what course of action will lead to the greatest good for present and future generations. What can we do that will be effective in limiting the amount of climate change while keeping the global economy healthy?
This module will lay the groundwork for carrying out some experiments with a computer model that will allow us to see what the economic costs are of pursuing different policies regarding climate change.
On completing this module, students are expected to be able to:
After completing this module, students should be able to answer the following questions:
Below is an overview of your assignments for this module. The list is intended to prepare you for the module and help you to plan your time.
There are a variety of ways that climate change will have an economic impact — some are gradual changes such as increased cooling costs for buildings, while others are more dramatic, related to the higher frequency of extreme weather events, such as Superstorm Sandy or the heat wave of 2003 in Europe, which killed tens of thousands of people. The costs of storms like Sandy are immense — New York will spend upwards of 35 billion dollars responding to the damages. This is serious money, and it is the cost of just one storm in one state! Hurricane Katrina racked up damages that are estimated at 100 billion dollars or more.
Because climate change has been and will certainly continue to be variable across the globe, the economic consequences are likewise variable. Some regions are likely to experience a net benefit in many respects, while others are likely to suffer much more serious changes that pose great economic threats. Warming in some areas might lead to a gain in some sectors of the economy, while in others areas, warming might cause a significant economic loss in that same sector. Our goal in this section is to get a sense of what these costs are on a global basis, without losing sight of the fact that the story varies from region to region.
In a general sense, the economists who study this problem tend to divide the costs into market and non-market costs. A market cost is a cost to some part of the economy that can be quantified in dollars, while a non-market cost is something that is not easily quantified because there is no market for it. A damaged ecosystem such as a coral reef is an example of a non-market cost: the damage has a cost, probably multifaceted and widespread, but it is not something that can be measured or expressed in dollars. So, let's try to list some of the important market and non-market costs and then get around to the tricky business of figuring out how it all might add up on a global basis.
All crops have an optimum range of temperature and precipitation, and as these change around the globe, crops will do either better or worse. It is estimated that 80% of global croplands and almost 100% of global rangelands depend on rainfall, so as rainfall patterns shift, agricultural production will increase in some places and decrease in others. With temperature increases of less than 3°C, the short-term agricultural impacts are expected to be mainly positive, but beyond 3°C, the impacts will be mainly negative. There will likely have to be shifts in where we raise crops and graze animals, and those shifts will cost money as well.
As with agriculture, the consequences of climate change in the forestry sector of the economy will vary from place to place, but one important factor is the spread of new diseases into forested areas that will decrease the productivity of this part of the economy. The recent spread of the pine bark beetle in the western US and the resulting devastation of coniferous forests there is a good example of the potential for future damages. Nevertheless, it is estimated that on a global scale, the forestry sector of the economy will show slight improvements.
Although natural fisheries are in decline globally, the attribution of this decline to climate change is not at all clear. In some regions, the dependence of a fishery on a broader ecosystem such as a coral reef will make it vulnerable to increased warming and acidification. On the whole, a continued shift from ocean fisheries to farmed fish production is expected, and this kind of managed food production appears to be less vulnerable to climate change.
Insurance is actually the largest single industry in the world, and insurers are concerned about climate change because it is clear from their records of premiums collected and payments made in relation to weather catastrophes that they are losing their ability to effectively insure people against climate-related damages. The insurance industry is adapting and paying close attention to the relationship between global warming and more frequent severe weather events, but in the future, this will mean higher premiums and a greater cost to the economy.
Global warming and consequent sea level rise will place burdens on the systems of roads, pipelines, water supply, water treatment, power transmission lines, etc., that make up the infrastructure of countries. Governments will have to spend more to keep these systems running, and this will cause a drag on the economy.
As the climate warms, we will use more electricity for cooling and less energy for heating, but the balance will vary across the globe. For the US, the Environmental Protection Agency estimates that for a 1°C rise in temperature, we will use 5-20% more energy for cooling and 3-15% less energy for heating, so the net effect might be a very slight increase in overall energy consumption. It is interesting to note that the US is the largest user of air-conditioning in the world, but it is expected that China may surpass us by 2020. The best estimates on a global basis indicate that we will spend more on increased air-conditioning than we will save in reduced heating, so this is another economic burden.
In mountainous areas around the globe, more and more snow-makers are appearing as ski resorts try to keep themselves viable by preventing the ski season from shrinking. There are vast economic stakes in the tourism based on recreation of this type. Beach resorts, likewise, face challenges from rising sea level.
As the climate changes, human health will face challenges that will ultimately cost money. The spread of warm-climate diseases, such as West Nile Virus, into areas where these diseases were previously unknown makes new demands on health care. Heat waves, which will likely become more severe and more frequent, can pose significant health risks (thousands died in France during a 2003 heat wave).
Coral reefs and many other marine ecosystems are threatened by warming and acidification; coastal terrestrial ecosystems are threatened by sea level rise; polar ecosystems are threatened by loss of ice and warming. All of these are changes that are underway, but their costs to the global economy are nearly impossible to figure out.
As glaciers melt and winter snows are diminished, an important source of freshwater will decline; as precipitation patterns change, surface water will decrease and groundwater aquifers will become depleted even faster. These costs are likely to be very significant in some regions, and those costs will undoubtedly be transmitted to the global economy in one way or another.
Numerous economists have tried to sum up the costs of global warming, including William Nordhaus, who needed a formulation for these costs for his DICE model. In Nordhaus' view, the best way to do it is to express the damages as a percentage of the global economic output, and the relationship he adopted looks like this:
You can see that for a 4°C increase in global temperature, the climate damages amount to about 4% of the global economy, which puts Nordhaus' formulation in line with an estimate by the IPCC of between 2 and 5% for a 4°C rise. The damages grow faster than linearly with temperature, so each additional degree of warming brings a larger increment of damage than the last. Many people have suggested that this curve is probably too conservative at the high-temperature end, and that with such drastic levels of warming, we would experience much greater levels of economic damage.
It might seem like damages totaling 4% of global output are not such a big deal, but consider that the devastating earthquake that hit Japan in 2011 caused damages equal to about 3% of Japan's GDP, and the country's economy was severely impacted.
Carbon emissions can be reduced (abated) by a variety of means — improved efficiency, burning cleaner fuels (natural gas instead of coal), capturing the carbon dioxide emitted during combustion at power plants and sequestering it, and switching to alternative sources of energy such as wind, solar, or nuclear, all of which result in lower carbon emissions.
Improving efficiency is an obvious choice, and it involves things like improved mileage for vehicles, better insulation and energy management for dwellings, and more efficient light bulbs such as LEDs. Efficiency can also be gained by modifying our behavior such that we do the same things in a way that uses less energy. For instance, we could make fewer trips to do our shopping by planning more carefully, or we could make better use of carpools, or we could make public transportation really work in our cities.
We could also switch to burning cleaner forms of fuels to generate our electricity or power our vehicles. Natural gas (mostly methane) is a cleaner form of fuel than either coal or gasoline, in part because methane is a simpler form of hydrocarbon — CH4 — and its main combustion products are water and CO2, with minor amounts of nitrous and sulfurous gases that contribute to pollution. In contrast, coal burning releases substantial quantities of these gases along with other harmful gases, some of which contain mercury, which causes long-lasting environmental damage. It is estimated that the US has lowered its carbon dioxide emissions by something like 0.5 GT in the last year, thanks to the increase in gas-burning power plants, utilizing gas from the Marcellus Shale and other gas-rich formations.
Carbon capture and sequestration (CCS) is another way to reduce emissions into the atmosphere. In essence, this involves capturing carbon at the point where it is emitted into the atmosphere (like a big power plant), then liquefying it and injecting it into an underground reservoir. This is a fairly new technology, and many experiments are underway throughout the world to figure out how to make this work. One of the first large experiments is taking place beneath the North Sea, where the Norwegians have injected a large quantity of CO2 into a layer of sandstone from which they had previously extracted oil and gas. For this to work, the CO2 has to stay put and not leak back up to the surface; these experiments are being carefully monitored to see how much leakage occurs. CCS can also be achieved by pumping CO2 into large aquifers, where it reacts with minerals.
The other strategies for reducing emissions revolve around technologies that do not involve burning fossil fuels for energy. Wind and solar are certainly expanding, but they are still not as cheap as generating electricity from fossil fuels. The costs of generating electricity by various means have been studied by the US Energy Information Administration, and some of their results are shown in the table below, ranking these energy sources by system levelized cost (in dollars per MWh of electricity) for a new plant that would come online in the year 2017. The system levelized cost includes the money needed to build and safely maintain a power plant, spread out over the lifetime of the plant.
Plant Type | Total System Levelized Cost ($/MWh)
---|---
Natural Gas Advanced | 65.5
Natural Gas Conventional | 68.6
Hydroelectric | 89.9
Wind | 96.8
Geothermal | 99.6
Conventional Coal | 99.6
Advanced Coal | 112.2
Advanced Nuclear | 112.7
Biomass | 120.2
Advanced Coal with CCS | 140.7
Solar Photo-Voltaic | 156.9
Solar Thermal | 251.0
Wind - Offshore | 330.6
As you can see, natural gas is currently the cheapest form of electricity generation by a wide margin. But it is surprising to see that hydropower and wind power are cheaper than coal, and that geothermal matches coal's cost. Hydropower in the US is already at about its maximum level of generation, but wind and geothermal are currently underdeveloped. Scaling up these cleaner forms of energy to the point where they can meet the total demand of our energy system cannot happen overnight, though, and it will cost money. Part of the problem here is that it is hard to scale up these other energy forms when natural gas is cheaper.
Reducing carbon emissions will almost certainly cost money in one way or another, and the question now is: how much? The answer is not obvious; abatement will become cheaper as new technologies make alternative, cleaner energy forms more affordable, but based on the way things stand now, Nordhaus has tried to figure out an approximate answer. Nordhaus assumes that there are some easy gains that cost almost nothing (like the switch from coal to natural gas that has occurred in the US in the last year), but that if we try to make deeper cuts, we have to shift to non-fossil-fuel forms of energy, and then it starts to get expensive.
Here is what Nordhaus comes up with for abatement costs:
What this shows is how the abatement costs vary with the emissions control rate. The emissions control rate is the fraction by which carbon emissions are reduced: a value of 1 means a complete stop to carbon emissions, and a value of 0 means we do nothing to reduce them. The abatement costs here are indicated as a percentage of the global economic output, and it is worth noting that even in the extreme case of an emissions control rate of 1 (completely halting carbon emissions), the cost is only 6% of the global economy. From an economic standpoint, though, those costs have to be compared with the benefits that might come from instead using that 6% to improve social well-being. How might we benefit from the expense involved with reducing emissions? This is where we need to go back to the costs related to damages caused by global warming. It is a complicated question, one best understood through a model that calculates the warming related to carbon emissions and, at the same time, the costs of damages and abatement.
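A curve of this shape can be sketched as a power law in the control rate. The exponent below is a hypothetical choice (the actual DICE parameters differ); the scale is set only so that full abatement costs 6% of gross output, matching the figure quoted above:

```python
# Illustrative abatement-cost curve (hypothetical parameters, not the real
# DICE values): cheap early cuts, increasingly expensive deep cuts.

def abatement_cost_fraction(mu, scale=0.06, exponent=2.8):
    """Abatement cost as a fraction of gross world output.

    mu is the emissions control rate (0 = do nothing, 1 = zero emissions).
    scale is chosen so that mu = 1 costs 6% of gross output, as in the text;
    an exponent > 1 makes marginal abatement costs rise with deeper cuts.
    """
    if not 0.0 <= mu <= 1.0:
        raise ValueError("emissions control rate must lie in [0, 1]")
    return scale * mu ** exponent

# Sample points along the curve:
for mu in (0.0, 0.25, 0.5, 1.0):
    print(f"ECR = {mu:4.2f} -> abatement cost = "
          f"{abatement_cost_fraction(mu):.4f} of gross output")
```

Note how convexity captures Nordhaus's assumption: the first cuts are nearly free, while the last cuts toward zero emissions dominate the total cost.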
Most of the world's governments agree by now that global warming poses a serious threat to the future well-being of all people, and they agree that it is desirable to reduce the concentration of CO2 in the atmosphere by lowering our emissions of CO2. This consensus was expressed in part through the Kyoto Protocol (signed in 1997, put into place in 2005), which set targets for emissions reductions by the countries of the world (the US, Canada, Andorra, Afghanistan, and South Sudan are the only holdouts). This protocol calls for reductions in emissions that would effectively produce the SRES A1B scenario that we talked about earlier in the course. The individual countries are then left to figure out how to meet the emissions reduction goals. So, how could countries make this work? That is the question.
There are two strategies that a country might adopt — a carbon tax or a cap-and-trade system. The carbon tax approach would reduce emissions by providing a strong incentive to be more efficient and use cleaner energy sources. The cap-and-trade system instead sets an overall limit or cap on emissions and then allocates or auctions off the right to emit carbon, and then allows these emission rights to be bought and sold in a kind of carbon market.
Before we can consider a tax on carbon, we need to know the cost of emitting a given quantity of carbon.
By emitting carbon into the atmosphere, we are effectively imposing an economic burden due to carbon dioxide's contribution to climate change, which will inevitably cost us money. The thing is, we don't pay these costs as we emit carbon, but they are nevertheless costs that will add up. Economists call these kinds of costs "externalities". They estimate that one ton of carbon emitted to the atmosphere imposes a cost of about $40; at roughly five tons of carbon emitted per American per year, the per capita cost works out to about $200. As you might expect, this is a very tricky thing to estimate, and estimates vary quite a bit.
Let's try to put these numbers into perspective, beginning with the emissions related to a normal American's activities. If you drive 10,000 miles in a year and your car gets 28 miles per gallon of gasoline, you are emitting about one ton of carbon (this is equivalent to 3.67 tons of CO2). A normal household in America might use 10,000 kWh (kilowatt-hours) of electricity, and if this is coming from a coal-burning power plant, the household is emitting about 3 tons of carbon per year. If your electricity comes from burning natural gas, the same amount of electrical power use emits just 1.5 tons of carbon.
What if we said that we were going to pay for these externalities related to emitting carbon into the atmosphere via some kind of tax or fee to cover the $40/ton cost? This would mean something like a 10¢ per gallon increase in gasoline and about a 10% increase in the average household electric bill for electricity generated by coal. By doing this, we would be covering the anticipated future costs of our carbon emissions.
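These figures can be checked with a little arithmetic. The gasoline emission factor and electricity price below are assumed round numbers for illustration, not figures from the text:

```python
# Back-of-the-envelope check of the carbon-fee examples. The emission factor
# and electricity price are assumed round numbers (labeled below).
COST_PER_TON_CARBON = 40.0      # dollars per ton of carbon, from the text

# Driving: 10,000 miles per year at 28 miles per gallon.
KG_CARBON_PER_GALLON = 2.4      # assumed: ~8.9 kg CO2 per gallon / 3.67
gallons = 10_000 / 28
tons_carbon_driving = gallons * KG_CARBON_PER_GALLON / 1000.0
fee_per_gallon = tons_carbon_driving * COST_PER_TON_CARBON / gallons

print(f"{gallons:.0f} gal/yr -> {tons_carbon_driving:.2f} t C/yr "
      f"({tons_carbon_driving * 3.67:.2f} t CO2/yr)")
print(f"carbon fee: about {100 * fee_per_gallon:.0f} cents per gallon")

# Household electricity: 10,000 kWh/yr from coal emits ~3 t C (from the text).
annual_fee_coal = 3.0 * COST_PER_TON_CARBON   # $120 per year
assumed_bill = 10_000 * 0.12                  # assumed price of 12 cents/kWh
print(f"coal-powered household fee: ${annual_fee_coal:.0f}/yr, about "
      f"{100 * annual_fee_coal / assumed_bill:.0f}% of a ${assumed_bill:.0f} bill")
```

Both results land on the figures in the text: roughly a dime per gallon of gasoline, and about a 10% bump on a coal-powered electric bill.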
But covering the anticipated future costs of carbon emissions is only part of the goal. We also need policies that will help reduce emissions. One might think that the taxes or fees mentioned above could contribute to a reduction in emissions in the sense that by making things more expensive, people will use less, and this is probably true to a certain extent, but we can definitely do more. For instance, imposing a larger tax or fee on gasoline and electricity generated by fossil fuels would provide a stronger incentive to consume less and thus emit less carbon.
Another approach is one known as “cap-and-trade”. In this policy, a state or country establishes an overall limit or cap on the carbon emissions for each year and then allocates or auctions shares of this total to utilities, companies, etc. These entities receive shares (which are like permits for emitting a certain amount of carbon) and can then sell them to others (thus the “trading” aspect to this policy). The appeal of this kind of approach is that there is a fixed limit to the emissions — it is controllable and predictable, whereas a simple “carbon tax” approach does not guarantee any particular total emissions amount and if a country wanted to use such a tax to meet specific goals, the tax would have to be adjusted frequently, which businesses would have a hard time dealing with (they like predictability in things like taxes).
This is the approach already adopted by the European Union, which launched the European Union Emission Trading Scheme in 2005. This is the largest greenhouse gas emissions trading scheme in the world and is one of the EU's central policies for meeting the cap set in the Kyoto Protocol. How is it working? The EU's total emissions have steadily declined since the scheme began, so it seems to be working.
The global climate system and the global economic system are intertwined — warming will entail costs that will burden the economy, there are costs associated with reducing carbon emissions, and policy decisions about regulating emissions will affect the climate. These interconnections make for a complicated system — one that is difficult to predict and understand — thus the need for a model to help us make sense of how these interconnections might work out.
The economic model we will explore here is based on a model created by William Nordhaus of Yale University, who is considered by many to be the leading authority on the economics of climate change. His model is called DICE, for Dynamic Integrated Climate-Economy model. It consists of many different parts, and to fully understand the model and all of the logic within it is well beyond the scope of this class, but with a bit of background, we can carry out some experiments with this model to explore the consequences of different policy options regarding the reduction of carbon emissions.
The DICE model includes a more primitive version of the global carbon cycle model we used in Module 5, but here we will make some adaptations to our carbon cycle model so that it includes the economic components of DICE.
The economic components are shown in a highly simplified version of a STELLA model below:
In this diagram, the gray boxes are reservoirs of carbon that represent in a very simple fashion our global carbon cycle model from Module 5; the black arrows with green circles in the middle are the flows between the reservoirs. The brown boxes are the reservoir components of the economic model, which include Global Capital, Productivity, Population, and something called Social Utility. The economic sector and the carbon sector are intertwined — the emission of fossil fuel carbon into the atmosphere is governed by the Emissions Control part of the economic model and the Global Temp. Change part of the carbon cycle model affects the economic sector via the Climate Damage costs.
Let’s now have a look at the economic portions of the model.
In this model, Global Capital is a reservoir that represents all the goods and services of the global economic system; so this is much more than just money in the bank. This reservoir increases as a function of investments and decreases due to depreciation. Depreciation means that value is lost, and the model assumes a 10% depreciation per year; the 10% value comes from observations of rates of depreciation across the global economy in the past. The investment part is calculated as follows:
Investment = Savings Rate x (Gross Output - Abatement Costs - Climate Damages)
The savings rate is 18.5% per year (again based on observations). The Gross Output is the total economic output for each year, which depends on the global population, a productivity factor, and Global Capital.
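The capital bookkeeping just described can be sketched as a single annual time step. The savings and depreciation rates are the ones given in the text; the dollar figures in the example are placeholders:

```python
# One annual step of the Global Capital reservoir, following the bookkeeping
# described in the text. Input dollar values below are placeholders.
DEPRECIATION_RATE = 0.10   # fraction of capital lost per year (from the text)
SAVINGS_RATE = 0.185       # fraction of net output invested (from the text)

def step_capital(capital, gross_output, abatement_costs, climate_damages):
    """Advance Global Capital by one year: add investment, subtract depreciation."""
    investment = SAVINGS_RATE * (gross_output - abatement_costs - climate_damages)
    return capital + investment - DEPRECIATION_RATE * capital

# Placeholder example: $200T capital, $70T output, $1T abatement, $2T damages.
new_capital = step_capital(200.0, 70.0, 1.0, 2.0)
print(f"capital after one year: {new_capital:.2f} trillion dollars")
```

Note that abatement costs and climate damages both reduce investment, which is the channel by which climate costs drag on the growth of capital in the model.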
The Abatement Costs are the costs of reducing carbon emissions and are directly related to the amount by which we try to reduce carbon emissions. If we do nothing, the abatement costs are zero, but as we try to do more and more in terms of emissions reductions, the abatement costs go up. If we go all out in this department, the Abatement Costs can rise to be 15% of the Gross Output.
Climate Damages are the costs associated with rising global temperatures. The way the model is set up, a 2°C increase in Global Temperature results in damages equal to 2.4% of Gross Output, but this rises to 9.6% for a temperature increase of 4°C, and 21.6% of Gross Output for a 6°C increase. This relationship between temperature change and damage involves temperature raised to an exponent that is initially set at 2 but can be adjusted.
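The three data points above (2.4% at 2°C, 9.6% at 4°C, 21.6% at 6°C) all fall on a quadratic with a coefficient of 0.006, which can be verified directly:

```python
# Climate-damage function implied by the figures in the text:
# damages (fraction of Gross Output) = coefficient * T**exponent, with the
# exponent 2 stated in the text and a coefficient of 0.006 recovered from
# the quoted values (0.006 * 2**2 = 0.024, i.e., 2.4%).

def climate_damage_fraction(delta_t, coefficient=0.006, exponent=2.0):
    """Damages as a fraction of Gross Output for a warming of delta_t (deg C)."""
    return coefficient * delta_t ** exponent

for t in (2, 4, 6):
    print(f"dT = {t} C -> damages = {100 * climate_damage_fraction(t):.1f}% of Gross Output")
```

Raising the adjustable exponent above 2 steepens the curve, which is one way to represent the view that damages at high warming are worse than the default assumes.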
It will be useful to have a way of comparing the climate costs, meaning the sum of the Abatement Costs and the Climate Damages, in a relative sense, so that we can see these costs as a percentage of the Gross Output of the economy. The model includes this relative measure of the climate costs as follows:
Relative Climate Costs = (Abatement Costs + Climate Damages)/Gross Output
Also related to the Global Capital reservoir is a converter called Consumption. A central premise of most economic models is that consumption is good and more consumption is better. This sounds shallow, but it makes more sense if you realize that consumption can mean more than just using things up; in this context, it means spending money on goods and services, and since services include things like education, health care, infrastructure development, and basic research, you can see how more consumption of this kind can be equated with a better quality of life. So, it may help to think of consumption, or better, consumption per capita, as one way of measuring quality of life. In the model, Consumption measures the total value of consumed goods and services and is defined as follows:
Consumption = (Gross Output - Climate Damages - Abatement Costs) - Investment
This is essentially what remains of the Gross Output after accounting for the damages related to climate change, abatement costs, and investment.
The population in this model is highly constrained — it is not free to vary according to other parameters in the model. Instead, it starts at 6.5 billion people in the year 2000 and grows according to a net growth rate that steadily declines as the population approaches 12 billion, at which point the population stabilizes. The declining growth rate means that we approach 12 billion very gradually.
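One simple way to realize a growth rate that declines toward zero as the population approaches 12 billion is a logistic form; this is only an illustrative sketch, and the initial growth rate below is an assumed placeholder, not a number from the model:

```python
# Sketch of the constrained population trajectory: growth slows as the
# population nears a 12-billion ceiling (logistic form, for illustration only).
CEILING = 12.0        # billions, from the text
INITIAL_RATE = 0.009  # assumed initial net growth rate per year (placeholder)

def step_population(pop):
    """One year of growth; the effective rate shrinks to zero at the ceiling."""
    return pop + INITIAL_RATE * pop * (1.0 - pop / CEILING)

pop = 6.5  # billions in the year 2000, from the text
for year in range(2000, 2300):
    pop = step_population(pop)
print(f"population after 300 years: {pop:.2f} billion (approaching, never exceeding, 12)")
```

The key behavior matches the text: the trajectory creeps up on 12 billion asymptotically rather than hitting it and stopping abruptly.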
The model assumes that our economic productivity will increase due to technological improvements, but the rate of increase will decrease, just like the rate of population growth. So, the productivity keeps increasing, but it does not accelerate, which would lead to exponential growth in productivity. This decline in the rate of technological advances is once again something that is based on observations from the past.
The model calculates the carbon emissions as a function of the Gross Output of the global economy and two adjustable parameters, one of which (sigma) sets the emissions per dollar value of the Gross Output (units are in billions of metric tons of carbon per trillion dollars of Gross Output) and something called the Emissions Control Rate (ECR). The equation is simply:
Emissions = sigma * (1 - ECR) * Gross_Output
Currently, sigma has a value of about 0.118, and the model we will use assumes that this will decrease as time goes on due to improvements in efficiency of our economy — we will use less carbon to generate a dollar’s worth of goods and services in the future, reflecting what has happened in the recent past. The ECR can vary from 0 to 1, with 0 reflecting a policy of doing nothing with respect to reducing emissions, and 1 reflecting a policy where we do the maximum possible. Note that when ECR = 1, then the whole Emissions equation above gives a result of 0 — that is, no human emissions of carbon to the atmosphere from the burning of fossil fuels. In our model, the ECR is initially set to 0.005, but it can be altered as a graphical function of time to represent different policy scenarios.
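The emissions equation translates directly into code; the sample gross output of $70 trillion below is a placeholder value, not a number from the text:

```python
# Industrial carbon emissions from the equation in the text:
# Emissions = sigma * (1 - ECR) * Gross_Output.

def emissions(gross_output, sigma=0.118, ecr=0.005):
    """Annual carbon emissions.

    ECR = 0 means no emissions control; ECR = 1 means a complete halt,
    which makes the (1 - ECR) factor, and hence emissions, exactly zero.
    """
    if not 0.0 <= ecr <= 1.0:
        raise ValueError("ECR must lie in [0, 1]")
    return sigma * (1.0 - ecr) * gross_output

# Placeholder gross output of $70 trillion:
print(f"no control (ECR = 0):   {emissions(70.0, ecr=0.0):.2f}")
print(f"default (ECR = 0.005):  {emissions(70.0):.3f}")
print(f"full control (ECR = 1): {emissions(70.0, ecr=1.0):.2f}")
```

Declining sigma over time (greater efficiency) and rising ECR (stronger policy) both push emissions down, while economic growth pushes them up; the model's emissions trajectory is the tug-of-war among the three.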
The social utility reservoir is perhaps the hardest part of the model to understand. This reservoir is the accumulated sum of something called the social utility function, which depends on the size of the global population, the per capita (per person) consumption, and something Nordhaus calls the social time preference factor, which includes a discount rate and another parameter (alpha) that expresses society’s aversion to inequality.
One of the most obvious ways of addressing climate change is to reduce carbon emissions. This can be done by developing alternative energy sources, by capturing and sequestering carbon from power plants, by developing more efficient technologies, etc., but all of these cost money, and they will continue to cost money into the future. How should we value those future costs at the present time?
The concept of a discount rate is an important one for this kind of economic modeling since it provides a way of translating future costs into present value. Here is how an economist might think about it: imagine you have a pig farm with 100 pigs, and the pigs increase at 5% per year by natural means. If you do nothing but sit back and watch the pigs do their thing, you’d have 105 pigs next year. So, 105 pigs next year can be equated to 100 pigs in the present, with a 5% discount rate. Thus, the discount rate is kind of like the return on an investment. Now, think about climate damages. If we assume a 4% discount rate, then about $1,010 million in damages 100 years from now is equivalent to $20 million in present-day terms. It may seem odd to treat damages like this — they do not reproduce naturally like pigs — but it does make sense if you consider that our global economy is likely to grow quite a bit in 100 years, so that something worth $20 million today will be worth about $1,010 million in 100 years. The 4% figure is the estimated long-term market return on capital.
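The discounting arithmetic works like compound interest run backwards; at 4% per year, money compounds by a factor of about 50 over a century:

```python
# Present value under a constant discount rate: PV = FV / (1 + r)**years.

def present_value(future_cost, rate, years):
    """Value today of a cost incurred `years` from now, discounted at `rate`."""
    return future_cost / (1.0 + rate) ** years

# $20 million today grows to about $1,010 million over 100 years at 4%...
future = 20.0 * 1.04 ** 100
print(f"$20M compounded 100 years at 4%: about ${future:.0f}M")

# ...so roughly $1,010M of damages a century out is worth about $20M today:
pv = present_value(1010.0, 0.04, 100)
print(f"present value of $1,010M in 100 years at 4%: about ${pv:.1f}M")
```

Try raising the rate to see the effect described below: at higher discount rates, even enormous far-future damages shrink to almost nothing in present-day terms.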
Why is this important? We would like to be able to see whether one policy for reducing emissions of carbon is economically better than another. Different policies will call for different histories of reductions, and to compare them, we need to sum up the expected future damages associated with each policy. The discount rate is the way to do this. It is important to think a bit more about what this means. If the discount rate is higher, then huge damages way off in the future are given little weight in the present day, whereas if the discount rate is zero, then the damages in the future are considered to be huge in the present. So, an economic model with a larger discount rate tends to favor doing little at the present time; a smaller discount rate tends to favor policies that take significant steps in the immediate future, thus avoiding damages and costs further down the line.
Getting back to the social utility function, it may help to think of it as a function of the per capita consumption and the assumptions about discounting future costs and benefits. This part of the model does not feed back on any other part of the model, so it is kind of like a scorecard for the economic parts of the model.
In Nordhaus’s DICE model, the goal is to maximize this Social Utility reservoir — to make it as big as possible through different histories of emissions reductions. The emissions history that yields the largest Social Utility is then deemed to be the best course of action.
One might wonder if there is a need for any kind of government regulation in order to curb emissions of carbon dioxide. Some people are of the opinion that there is already too much regulation and that these kinds of problems can just take care of themselves. Isn't it enough that we all recognize that to avoid the damages from climate change, we need to reduce our emissions of greenhouse gases?
The problem here is a lack of economic incentive in dealing with an entity like the global atmosphere that is shared by all and owned by none. This problem has been recognized for a long time, but was first made popular by an ecologist, Garrett Hardin, in 1968 and is commonly known as "The Tragedy of The Commons." Below is a video that describes the essence of this idea.
Hardin's concept is fairly simple, and he illustrated it with a kind of parable, as described in the video. Suppose there is a common piece of grassland in a village — the commons — and it is owned by no one but is available for all to use. These commons still exist in many areas, such as the village of Comberton in England.
In most cases, the use of the commons is regulated by the community, but in this case, we'll pretend there are no regulations. People start to graze their sheep on this nice grass and they benefit from that. Based on the size of the field and the rate of grass growth, there is a carrying capacity for the field — the maximum number of sheep the field can support in a sustainable manner. If you put more sheep on the land than the carrying capacity, the resource will dwindle and eventually disappear altogether and, at that point, it will be of no use to anyone. But as long as there is any grass at all, it is to each individual's benefit to continue to place new sheep in the field. So, overgrazing is inevitable in this case, and the common resource is depleted. There are many documented examples of this kind of occurrence, and they all are related to cases where there is an open-access resource available to everyone.
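The overgrazing dynamic can be caricatured with a tiny simulation: grass regrows logistically, each sheep removes a fixed amount per season, and, with no regulation, herders keep adding sheep as long as any grass remains. All of the parameters are invented for illustration:

```python
# Toy tragedy-of-the-commons simulation (all parameters invented).
GRASS_CAPACITY = 100.0   # maximum standing grass on the commons
REGROWTH_RATE = 0.5      # logistic regrowth rate per season
EATEN_PER_SHEEP = 1.0    # grass removed per sheep per season

def run_commons(seasons, add_sheep_each_season):
    """Return (grass, sheep) after simulating the commons for `seasons`."""
    grass, sheep = GRASS_CAPACITY, 10
    for _ in range(seasons):
        if add_sheep_each_season and grass > 0:
            sheep += 1   # individual incentive: one more sheep is always a gain
        regrowth = REGROWTH_RATE * grass * (1 - grass / GRASS_CAPACITY)
        grass = max(0.0, grass + regrowth - EATEN_PER_SHEEP * sheep)
    return grass, sheep

unregulated = run_commons(60, add_sheep_each_season=True)
capped = run_commons(60, add_sheep_each_season=False)
print(f"unregulated after 60 seasons: grass = {unregulated[0]:.1f}")
print(f"herd capped at 10 sheep:      grass = {capped[0]:.1f}")
```

The unregulated run collapses the pasture to nothing within a couple of decades, while the capped herd settles into a sustainable equilibrium, which is exactly the contrast Hardin's parable is meant to convey.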
How can this tragedy of the commons be avoided? One way would be for the community to impose a cap-and-trade system on grazing. Here is how it could work. The community studies the problem and figures out what the carrying capacity of the field is in terms of the number of sheep. Then they allocate grazing shares equally to everyone in the community. Community members can buy and sell these shares so that if someone does not want to deal with sheep, they can still benefit from the common resource by selling their shares to someone who is willing to graze more sheep. The shares would be re-allocated each year in case the carrying capacity changed. Another approach might be for the community to sell the land to individuals and then let each individual farmer manage their own plot of land, in which case they would have an economic incentive to manage their land in the best way possible, avoiding the overgrazing problem. The community might place some restrictions on what the owners could do with the land, like preventing them from putting up apartment buildings or a feedlot; this would effectively be like the zoning regulations that most communities have.
The point to take away from this is that when you have a commonly held resource with open access, everyone has to act together in a coordinated, regulated way in order to avoid depleting or damaging the resource and ensuring that the resource serves the best interests of everyone affected by the resource. In the case of carbon dioxide emissions into the atmosphere, the best interests of everyone can only be served if there is some form of a regulatory plan; otherwise, we will succumb to the tragedy of the commons. Furthermore, the reduction of carbon dioxide emissions has to be coordinated so that each country has confidence that if they do their part, the other countries will do their parts and the global concentration of CO2 will stabilize or even become lower. This has, in principle, already been done and agreed upon by most of the countries of the world through the Paris Climate Accord.
The Inflation Reduction Act of 2022 is likely the most aggressive step taken in the US policy realm towards emissions reduction. The Act includes numerous components, including lowering the price of prescription drugs, increasing tax rates for large corporations, funding for IRS tax enforcement, and lowering the budget deficit. But the key part from a climate point of view is its promotion of clean energy, thereby reducing emissions and setting the US on a path towards meeting its Paris climate goals. The Act includes significant funding, through tax incentives, for the installation of solar panels by households, and increased tax rebates for purchases of electric vehicles. More broadly, the Act includes funding for states and local municipalities to install charging stations for electric vehicles and to convert to renewable energy production. The Act also includes tax incentives for corporations to produce renewable energy. Finally, there is funding for cutting-edge technologies such as carbon capture and storage and clean hydrogen. The EPA has estimated that the Inflation Reduction Act will reduce US CO2 emissions by 35 to 43 percent below 2005 levels by 2030. This is a start, but more policies will be required in the near future if we are to limit warming to 1.5°C.
As we have seen in Module 9, climate change and population growth will present some major challenges for our agricultural system. These challenges vary around the globe, and they also vary with the kind of agricultural activity — raising livestock, growing grains, growing fruits, etc.
In the future, more carbon dioxide in the atmosphere will help some crops to grow faster — this is the CO2-fertilization effect we discussed in Modules 5 and 9. In many regions, a modest amount of warming will also enhance plant growth. Combined, these two effects mean that under a moderate warming scenario such as SRES A1B (the optimistic one), global agricultural output is expected to grow slightly.
However, most plants have an optimum growth temperature, and if the temperature rises too high, the growth of these plants declines rapidly. For instance, corn productivity declines rapidly above 95°F and soybeans decline rapidly above 102°F. In many areas, the warming will not exceed this optimum temperature range, so yields will probably increase slightly due to the combination of higher CO2 and warmer temperatures. However, more frequent floods and droughts may reduce yields, as can be seen in the recent history of corn production in the US:
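The threshold behavior described above can be sketched numerically. The following is a purely illustrative model: the thresholds echo the figures in the text (corn near 95°F, soybeans near 102°F), but the linear response slopes are invented placeholders, not values from any real crop model.

```python
# Illustrative threshold model of crop yield response to temperature.
# THRESHOLDS_F come from the text; gain and loss slopes are made up.

THRESHOLDS_F = {"corn": 95.0, "soybeans": 102.0}

def relative_yield(crop, temp_f, gain=0.005, loss=0.06):
    """Relative yield (1.0 = baseline at the crop's threshold temperature)."""
    threshold = THRESHOLDS_F[crop]
    if temp_f <= threshold:
        # Mild benefit from warmth approaching the threshold.
        return 1.0 + gain * (temp_f - threshold)
    # Rapid decline above the threshold, floored at zero.
    return max(1.0 - loss * (temp_f - threshold), 0.0)

print(relative_yield("corn", 90))   # slightly below the threshold baseline
print(relative_yield("corn", 100))  # sharp loss just 5°F above 95°F
```

The point of the sketch is the asymmetry: in this toy parameterization, a few degrees of warming below the threshold helps only marginally, while the same few degrees above it cuts yields steeply.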
Furthermore, it appears that many weeds will also grow much better in a warmer climate, increasing the need for the use of herbicides in order to keep yields high.
Other types of crops will face different challenges in a warmer climate. For instance, some fruit crops do not benefit from warming at all, and many may be damaged by the earlier arrival of spring — early warmth encourages them to flower sooner, leaving that year's crop vulnerable to subsequent frost damage, which can severely limit fruit production. These frosts following premature warming are likely to become more common in the future.
Livestock may be at risk, both directly from heat stress and indirectly from reduced quality of their food supply. The grasses in pastures do not respond much to increased CO2, but they do decline with higher temperatures and less water. A good portion of the rangeland in the US is in areas that can expect less precipitation and less surface water in the future, so livestock yields may suffer.
Fisheries will be affected by changes in water temperature that shift species ranges, make waters more hospitable to invasive species, and change lifecycle timing.
So, how can we adapt to these changes? One way is by being more careful about water management, focusing on water efficiency to deal with times of drought. Another measure is to plant a more diverse mix of crops in more regions of the country so that if a flood, drought, heat wave, or late frost occurs, not all of the agricultural production will be reduced. Yet another measure is to develop new strains of crops and grasses that are better suited to the new climate conditions. Developing new strains is expensive, but it is made a bit easier by the fact that there is already a vast, diverse agriculture in the more tropical regions, and we can adapt methods and crop varieties that are already in use in these warmer areas. A final adaptation strategy involves changing the places where we grow certain crops, shifting the location to find the optimum set of growing conditions. This last option would be expensive if we attempted it over a short time span, but if the shift is gradual, the economic impacts are likely to be limited.
The issues with water supply are largely related to changing precipitation patterns that will leave many areas drier and with more frequent droughts, while other areas will have to cope with greater precipitation and more frequent severe floods. For the US and Central America, the prospects for changes in surface water are quite clear, as shown in the model prediction for the year 2080 under the optimistic A1B scenario:
If there is a silver lining to the red regions of the above map, it is that the water reductions will occur in a part of the country that has already had to confront water shortages — people in the west and southwest already know the importance of water conservation, and they are already taking measures to make sure that their water supply will be adequate. A big part of this is attitude adjustment — getting used to using less water. Another part of the adaptation strategy is to greatly increase water efficiency — doing the same basic things with less water. Another part is water storage in dams and aquifers to take advantage of the rainfall during wet years, making it available for drier years. Yet another part involves developing technologies for recycling water.
Many areas of the country will instead have to deal with more water, and the consequences include a greater strain on storm sewer systems and related flood-control systems. In the low-lying coastal parts of the country, sea level rise will lead to salt-water intrusion into shallow aquifers, rendering those aquifers unsuitable.
For the US, the EPA has already developed a long-term strategy for dealing with these water-related aspects of climate change, including programs to assist local and state governments and utilities to plan for the future challenges.
Energy production and use are the most important means by which we are altering the global climate and, in turn, the changing climate will impact our energy production and consumption in a variety of ways.
Warming will be accompanied by decreases in demand for heating energy and increases in demand for cooling energy. The latter will result in significant increases in electricity use (most cooling uses electricity; heating draws on a wider array of energy sources) and higher peak demand in most regions. This picture obviously varies around the country and the globe, but in the US there has been more growth in regions that are mainly cooling regions, thus exacerbating this effect. The general picture is illustrated by looking at four major cities in this graph, which shows heating and cooling degree days at present and as expected in the future under different emissions scenarios.
One heating degree day accrues for each degree the day's temperature falls below 65°F; cooling degree days work the same way for each degree above 65°F. So one day at 67°F contributes two cooling degree days, as do two days at 66°F. Summing these up for a whole year gives the data shown in this figure (Recent History bars). Colder cities like Chicago and New York have larger heating degree day sums than cooling degree day sums.
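The degree-day bookkeeping just described can be written out in a few lines. This is a minimal sketch, assuming daily mean temperatures in °F and the conventional 65°F base; the sample temperatures are invented for illustration.

```python
# Degree-day accounting as described in the text: each degree below the
# 65°F base adds a heating degree day, each degree above adds a cooling
# degree day, summed over all days in the record.

BASE_TEMP_F = 65.0

def heating_degree_days(daily_mean_temps_f):
    """Total degrees below the base temperature, one entry per day."""
    return sum(max(BASE_TEMP_F - t, 0.0) for t in daily_mean_temps_f)

def cooling_degree_days(daily_mean_temps_f):
    """Total degrees above the base temperature, one entry per day."""
    return sum(max(t - BASE_TEMP_F, 0.0) for t in daily_mean_temps_f)

# One day at 67°F gives the same cooling total as two days at 66°F.
print(cooling_degree_days([67]))      # 2.0
print(cooling_degree_days([66, 66]))  # 2.0
print(heating_degree_days([64, 60]))  # 1 + 5 = 6.0
```

Annual sums of these two quantities, city by city, are what the figure's Recent History bars report.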
Energy production is likely to be constrained by rising temperatures and limited water supplies in many regions. Power plants are the second biggest users of surface water (after irrigation), and the hotter it is, the more water is needed to cool the plants. At the same time, as the water itself warms, more of it is needed to accomplish the same cooling job. A powerful example of this effect comes from the French heat wave of 2003, during which a number of nuclear power plants had to halt production because the cooling water was not cool enough to safely continue generating electricity.
Energy production and delivery systems are exposed to sea-level rise and extreme weather events in vulnerable regions. A good example of this comes from Port Fourchon, Louisiana, which supports 75% of deepwater oil and gas production in the Gulf of Mexico, and its role in supporting oil production in the region is increasing. The Louisiana Offshore Oil Port, located about 20 miles offshore, receives 1.3 million barrels of oil per day and transfers it to refineries, accounting for 50% of our nation's refining capacity, making this one of the most important components of our energy production system. One road, Louisiana Highway 1, connects Port Fourchon with the nation, and it is increasingly vulnerable to flooding during storms due to the combined effects of sea level rise and ground subsidence. Louisiana is currently upgrading Highway 1, elevating it above the 500-year flood level in order to prolong its viability.
Climate change is also likely to affect some hydropower production in regions subject to changing patterns of precipitation or snowmelt.
How can we adapt to these coming changes? Part of the solution is to identify vulnerable features of our energy production system (such as Port Fourchon) and then safeguard them from the expected consequences of continued warming. Another big part of the adaptive strategy is to reduce our consumption of energy by increasing efficiencies in transportation (which we are already doing), home appliances, and buildings. New York City and Chicago have both undertaken massive programs to minimize the heat absorbed by their cities by planting trees and installing reflective or green roofs on buildings. Trees and green roofs cool by evapotranspiration, and the effect can be significant. These steps will reduce the cooling demands in these cities. We can also reduce the demand on power plants through more distributed, small-scale energy production that is carbon neutral. This is happening in a big way in many parts of Europe, where solar panels have been springing up in farmers' fields — the farmers are effectively growing energy, attracted by strong support from the state-run utilities responsible for electricity.
There are a variety of human health concerns associated with warming in the future. The most serious concern is related to deaths caused by heat waves.
As discussed in Module 2, late June 2021 saw record-breaking heat in the Pacific Northwest of the US and Canada, with Portland, Seattle, and Vancouver shattering previous temperature records. Temperatures reached 116°F in Portland, 108°F in Seattle, and 121°F in Lytton, British Columbia. The heatwave resulted from a massive heat dome that developed and sat over the area. This dome of hot air was very dry and led to wildfires.
This area was particularly vulnerable to heat because only about 40% of homes have air conditioning and residents are not accustomed to extreme temperatures. Estimates are that over 500 people died as a result of the heat, many of them members of vulnerable populations, particularly older and poorer people.
Climate models show that this region will need to become accustomed to the heat, and these severe temperatures will occur regularly as the 21st century progresses. In fact, temperature data clearly show warming in this region over the last century.
In the US as a whole, the future almost certainly will have more frequent hot days and nights. This can be seen in the following set of maps, which show the number of days over 100°F for the recent past and the end of this century under two different emissions scenarios:
The message here is that throughout most of the country, we will be challenged with many more hot summer days. By the end of the century, a cool, northern state like Minnesota will have more +100°F days than the southern tip of Texas. What can be done about this? How can we adapt? Can we mitigate the impacts of these extreme heat events?
Some excellent examples of how to adapt to and mitigate the consequences of these extreme heat events come from several US cities. Philadelphia, in 1995, launched a Hot Weather Health Watch and Warning System to respond to health threats from more frequent heat waves. The system includes education, media alerts, information hotlines (no pun intended), and cooling shelters. They estimate that this system has already averted more than 100 heat-related deaths. Both New York City and Chicago have taken steps to cool their downtown areas by covering roofs with reflective materials and plants; these steps will mitigate the overheating in the city centers, where temperatures can be as much as 4°C warmer than the surroundings. In addition to minimizing heat-related deaths, these measures will also reduce the amount of electricity needed for cooling, thus providing an extra benefit. Many regions are modifying building codes to promote new buildings that are easier to keep cool (and easier to keep warm).
So, there are some relatively easy measures that can be taken to adapt to more frequent heat waves.
As the climate warms, many areas are already finding that new infectious diseases are becoming a problem. One familiar example of this is West Nile Disease, a disease spread by a species of mosquito that is normally found in warm regions of the world. This disease, once introduced to the US, has spread rapidly, aided by generally warmer temperatures. There have been more than 30,000 cases of West Nile Disease in the US and over 1,000 deaths. In the US, we are already adapting to this new disease through education and the use of pesticides to limit the spread of the mosquitoes. Similar new diseases can also be adapted to, but public health officials will have to be diligent in looking out for the arrival of the new diseases — which is something they already do.
In summary, although warming will bring new challenges to human health, the adaptations are relatively straightforward, and in many places, they are already beginning.
As the climate warms, there will be a variety of impacts on our transportation system, which is a critical element of our entire economic system.
Sea-level rise and storm surge will increase the risk of major coastal impacts, including both temporary and permanent flooding of airports, roads, rail lines, and tunnels. Flooding from increasingly intense downpours will increase the risk of disruptions and delays in air, rail, and road transportation, and damage from mudslides in some areas. The increase in extreme heat will limit some transportation operations and cause pavement and track damage. In Alaska, the melting of permafrost has already begun to compromise roads, railways, and pipelines. On the plus side, decreased extreme cold will provide some benefits such as reduced snow and ice removal costs.
Federal, state, and local agencies are already taking steps to protect transportation systems from climate change impacts. Adaptation measures across the country are shaped by local impacts. Specific adaptation approaches include:
These adaptations are relatively easy to implement since the transportation infrastructure is in nearly constant need of upkeep — it simply will cost a bit more and will require some foresight in how upgrades are made.
At the same time, changes in transportation are a large part of the solution to the climate problem. In the US, cars and trucks are the largest producers of CO2, accounting for 28 percent of emissions annually. To meet the Paris Agreement emissions goals, the US will have to control this output. The Obama administration established mandatory fuel standards for cars and trucks, requiring automakers to achieve a fleet average of 51 miles per gallon by 2025. These standards were loosened by the Trump administration. More recently, the Biden administration strengthened the Obama fuel standards with stringent rules for cars and trucks produced after 2023, and is also requiring US automakers to produce 50% electric vehicles by 2030. This goal now appears feasible given the significant recent advances in battery technology. These strategies are a central part of the US pledge to cut emissions by 50% from 2005 levels by 2030.
On a per-passenger basis, planes emit far more CO2 than cars, and short flights are especially polluting, so alternative forms of transportation such as trains are environmentally beneficial. Planes also release their emissions directly into the upper atmosphere, where they have a longer-lasting effect. There are, of course, many more cars than planes, so cars' total emissions are higher, but cutting back on air travel is nevertheless a key part of emissions reduction.
Some aspects of climate change in the future are so challenging that the only real option is for people to migrate or relocate. One recent study concluded that there will be as many as 200 million climate-related migrants on the move in search of livable conditions by 2050. There are several different climate-related changes that may trigger migrations and relocations, but the main reasons have to do with rising sea level and reduction in surface water availability for agriculture and basic living.
Inhabitants of low-lying islands such as the Maldives (whose highest point is just 2.3 m above sea level), or low-lying coastal areas such as Bangladesh, are very clearly in for trouble as the sea rises. These areas are part of what has been called the Low Elevation Coastal Zone (LECZ) that lies within 10 meters of present sea level. Overall, this zone covers 2% of the world’s land area but contains 10% of the world’s population and 13% of the world’s urban population. Many of the countries with a large share of their population in this zone are small island countries, but most of the countries with large populations in the zone are large countries with heavily populated delta regions. Almost 65% of cities with populations greater than 5 million fall, at least partly, in the LECZ. In some countries (most notably China), urbanization is driving a movement in population towards the coast. Many of these places do not have the resources to keep building higher and higher sea walls to keep the water out, and even if they did, it might still be a losing battle. So, they will have to move to somewhere else.
Alaskan coastal and river communities are experiencing greater erosion and flooding because of increased storm activity and windiness; reduced sea-ice extent, which increases the intensity of storm surges; and thawing of permafrost, which increases susceptibility to erosion. Traditionally, many of these communities were semi-nomadic, moving inland during periods of severe storms, and had little permanent infrastructure. During the past 100 years, however, their mobility has been reduced by the building of houses, schools, airports, and other permanent facilities—changes that have increased their vulnerability to climate change. Six Alaskan communities are now planning some type of relocation. However, no funds have been appropriated to begin the relocation process. The U.S. Army Corps of Engineers has identified 160 additional villages in rural Alaska that are threatened by climate-related erosion, with relocation costs estimated at $30-50 million per village.
If we take a global inventory of the areas within the LECZ, we see a number of important features:
In most regions of the world, the majority of the population in the LECZ resides in cities; Asia is the big exception. The fact that most of these at-risk people live in cities raises problems for relocation, since cities depend on an extensive infrastructure that is not easy to recreate in another place, while more rural populations are probably easier to relocate. Asia, in addition to having the greatest overall population, also has by far the greatest number of people who will have to deal with moving to higher elevations as the sea level rises.
On an individual basis, some countries are much worse off than others, as can be seen in this ranking of countries based on the total population that lives in the LECZ:
Asian countries occupy 8 of the top 10 here, so this will be the region of greatest concern when it comes to relocation problems related to sea-level rise.
If you look at the ranking in terms of the percentage of the population at risk from sea-level rise, you see the following:
This list excludes countries with a total population of less than 100,000 or an area smaller than 1,000 km2. If all countries were included, 7 of the top 10 would be places with populations of less than 100,000, and the top 5 would have more than 90% of their area in the LECZ (Maldives, Marshall Islands, Tuvalu, Cayman Islands, Turks and Caicos Islands).
In many parts of the world, especially rural areas, the local economy is based on agriculture that is made possible through irrigation, and most of that irrigation water comes from the diversion of rivers. As the climate changes, many areas will see a decline in the surface water needed for agriculture, and so these local economies will struggle. The global pattern of change can be seen in this figure, which is an average from many different climate models for the 20 year period centered on 2080:
People will lose jobs — not just farmers, but also the many people whose work is somehow connected to agriculture — and these people will have to seek work elsewhere. Most of these people will move to cities, accelerating a general trend over the past few decades. Many of these migrations or relocations may not be dramatic or traumatic, but in some cases, they will cause real hardship. The vast majority of the people who will be required to move are lower-income people who do not have the means nor the political capital to make these transitions easily.
The above global map does not tell the whole story though, since the timing of water availability is important. In the whole region of Asia surrounding the Himalayan mountains, the rivers that drain from mountain glaciers are an essential part of agricultural economies. The glaciers melt back in the summer and provide important streamflow during a period of the growing season when rainfall is low. These glaciers are melting rapidly, and once they disappear, the dry-season glacial meltwater will not be there to supply the irrigation water. As a result, agricultural production and the work related to it will also dry up. In Pakistan, for example, 90% of the agriculture is based on irrigated water diverted from the Indus River, which is fed by Himalayan glaciers.
The global map above highlights an important reduction in surface water in the region of Central America. This is likely to increase the existing pressure for migration to the US.
Geoengineering is the intentional global-scale modification of Earth’s climate system, in order to reduce global warming. There are two general categories of geoengineering schemes — CO2 removal and insolation reduction.
These projects seek to reduce insolation (incoming solar radiation) by deflecting sunlight or by increasing the albedo (reflectivity) of the atmosphere. They do not reduce greenhouse gas concentrations in the atmosphere, and thus do not address problems such as ocean acidification caused by these gases. Solar radiation management projects appear to have the advantage of speed and, in some cases, cost. There are a variety of ways that we might achieve a reduction in insolation and thus cool the planet:
Carbon dioxide removal projects address the root cause of warming by removing greenhouse gases from the atmosphere. These projects are generally slower and more expensive than some of the insolation reduction schemes. There are a variety of ways that this could be done, including:
Although these geoengineering schemes may be attractive in the sense of providing a solution to the problem without having to get all the countries of the world to make a dramatic reduction in CO2 emissions, some of them clearly have potentially harmful consequences.
As a general rule, when people try to control natural systems, they find that the natural systems are more complex than they realized, with many unintended consequences. This is likely to be the case with geoengineering as well. With many of these schemes, there may be winners and losers — cooling the climate in one region (or the globe as a whole) may lead to devastating droughts, floods, or damaging cold in other regions. Adding sulfur to the atmosphere will lead to acid rain, and it may also deplete the ozone layer, exposing us to more harmful ultraviolet radiation from the sun.
The aerosol injection schemes are so inexpensive that one country could decide to take action unilaterally, and this would likely lead to serious problems in international relations. All of the insolation management schemes have the problem of not addressing the CO2 emissions, which would allow the CO2 concentration in the atmosphere to rise to very high levels; then if the geoengineering scheme failed for some reason, the climate would warm much faster than anything we have experienced so far.
Most scientists agree that geoengineering schemes should not be seen as the silver bullet that will solve our global warming problems quickly and painlessly. These schemes are like treating the symptoms of a disease rather than the root cause, a course of action that is never a real solution. But until we find a way to treat the root cause (reduce emissions of CO2), some of these schemes may buy us more time while helping us avoid serious climate damages. While many climate scientists believe that there may be an important role for these schemes in our future, we do not yet fully understand the potentially harmful side effects of these projects, so continued research is very important.
If you live in a wealthy, highly-developed, technologically sophisticated country like the US, it is fairly easy to see how we will be able to adapt to climate change without undue suffering. Of course, we will suffer the consequences of severe storms and droughts, and we will eventually have to move a lot of people and activities from places like the Mississippi Delta region. But these are not existential threats of the type faced by the millions of people living on less wealthy low-lying islands, who will eventually see their island homes submerged. And many other nations do not have the technological abilities, wealth, and global influence to make the necessary adaptations.
This is very much a problem related to the tragedy of the commons mentioned in Module 11. The commons, in this case, is the global atmosphere, which is shared by all, and everyone has the freedom to put as much CO2 into the atmosphere as they want (unless they are truly restricted by something like the Paris Agreement). Everyone feels the effects of the total concentration of CO2, but not everyone contributes equally to this effect. Individual actors (countries) have little motivation to halt their CO2 emissions (which costs money) for the benefit of others. The only way to solve these kinds of problems is through regulation. But regulation does not help us with the question of how to deal with the problems associated with the climate change that will occur in the future. The map below shows that the countries most impacted by climate change are not those emitting the majority of the CO2.
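The incentive structure of the commons dilemma can be made concrete with a toy payoff calculation. All the numbers here are invented for illustration: an assumed abatement cost of 1 unit per country, and a shared climate damage of 0.25 units imposed on every country for each country that keeps emitting.

```python
# Toy illustration of the tragedy of the commons: each country pays
# ABATEMENT_COST to cut its own emissions, and every country suffers
# DAMAGE_PER_EMITTER for each country that keeps emitting.
# All values are invented placeholders.

N_COUNTRIES = 10
ABATEMENT_COST = 1.0
DAMAGE_PER_EMITTER = 0.25

def payoff(abates, num_other_abaters):
    """Net payoff to one country, given its choice and the others' choices."""
    emitters = (N_COUNTRIES - 1 - num_other_abaters) + (0 if abates else 1)
    cost = ABATEMENT_COST if abates else 0.0
    return -cost - DAMAGE_PER_EMITTER * emitters

# Whatever the others do, a single country is better off emitting...
print(payoff(True, 9), payoff(False, 9))   # -1.0 vs -0.25
print(payoff(True, 0), payoff(False, 0))   # -3.25 vs -2.5
# ...yet everyone abating (-1.0 each) beats everyone emitting (-2.5 each).
```

With these numbers, cutting emissions costs a country 1 unit but spares it only 0.25 units of its own damage, so emitting always wins individually, even though coordinated abatement leaves every country better off. This is exactly why a binding, mutual agreement is needed.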
Try to place yourself for a moment in the shoes of a Maldive Islands citizen or someone in Fiji or the Solomon Islands. The sea level is rising and will continue to rise in the future due to global warming, and your country will no longer be viable for living — you will have to migrate and relocate somewhere else. But where? And how will you find the resources to make this move — acquiring a new house, a new job? (If you own a home or property, it will obviously be worthless.) You might very well ask yourself — whose fault is this? And you might feel that the blame lies primarily with the big emitters of CO2 — the big, wealthy nations of the world. But how can you get them to help you with your problem? What power do you have?
You can see that this is a tricky problem. There is a big question of equity when it comes to climate change — the biggest emitters of CO2 bear the most responsibility for climate change, and they are also among the wealthiest nations. These big emitters are the ones best positioned to successfully adapt to climate change; they are also the ones with the resources to help some of the poorer victims of climate change. Should these big emitters help bear the costs of climate adaptation for the less wealthy nations that have emitted much smaller shares of CO2 into the atmosphere? If so, how could they be induced to do so? These are important questions, and they do not yet have good or widely accepted answers. Some people advocate a globally assessed carbon tax, in which different countries would be taxed at different rates based on their historical emissions; the money would then be used to help with adaptation and relocation where necessary. But this is just a proposal, and implementing it would require the kind of international agreement that has so far proven elusive with respect to climate change.
This is very clearly an important aspect of climate change and our future that is in need of a solution.
The goal of this exercise is to explore a range of scenarios for our future in such a way that our energy needs are met, our economy is strong, and our climate is controlled to some extent. The underlying premise is that if we do nothing and continue down our current path, the climate will warm to truly dangerous levels, which will have serious consequences for our economy and thus our quality of life. The options we will explore include shifting to renewable energy sources and conserving energy (being more efficient so that we do the same things with less energy consumption) — both of these limit the carbon we emit to the atmosphere. We will also explore two other options that are sometimes called "geoengineering" — one involves injecting sulfate aerosols into the stratosphere to block a little of the sunlight, and another involves the direct removal of carbon from the atmosphere, sometimes called negative carbon emissions.
All of these choices involve costs, and the model calculates these costs. Another kind of cost comes in the form of climate damages, and the model calculates these too. From an economic standpoint, the best scenario is one that minimizes the costs because these costs represent a drain on the global economy; the global economy will be better able to meet all of the needs of humanity if we can keep the costs down. The figure below, modified from the one you’ve seen before in Module 5 on the carbon cycle, shows the general scheme the model uses to calculate all of the costs.
It might help to look at this backward from the Total Costs, which sum the climate damage costs, the costs of fossil fuel energy, the costs of renewable energy, the costs related to energy conservation, and the costs related to geoengineering. The climate damage costs are related to the temperature change, and they rise as the square of the temperature increase, so the cost of going from 5°C to 6°C is considerably greater than the cost of going from 1°C to 2°C. The fossil fuel, renewable, and conservation costs are related to how much energy each source provides. The ideal outcome from an economic standpoint is the smallest Total Costs, because that leaves more money to pour back into the economy and provide a higher quality of life.
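The quadratic damage assumption can be written out directly. This is a minimal sketch: the coefficient k is a made-up placeholder rather than the value used in the course model, but it shows why each additional degree of warming costs more than the one before it.

```python
# Quadratic climate damage function: damages scale with the square of
# the global temperature change. The coefficient k is a placeholder.

def climate_damage_cost(delta_t_celsius, k=1.0):
    """Damage cost (arbitrary units) for a given temperature change."""
    return k * delta_t_celsius ** 2

# The marginal cost of each extra degree keeps growing:
print(climate_damage_cost(2) - climate_damage_cost(1))  # 3.0 units for 1 -> 2 degrees
print(climate_damage_cost(6) - climate_damage_cost(5))  # 11.0 units for 5 -> 6 degrees
```

In this toy form, the sixth degree of warming costs almost four times as much as the second, which is the point the text makes about 5°C to 6°C versus 1°C to 2°C.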
At the core of this model is the global carbon cycle model coupled with the simple climate model — you’ve seen these before. The model also calculates the energy demands and carbon emissions that reflect the population size and choices we make about how much to pursue conservation and renewable energy sources. The model you will work with here has some additions that represent two geoengineering solutions — the addition of sulfate aerosols into the stratosphere to block some of the sunlight (SAG for sulfate aerosol geoengineering) and the direct removal of carbon dioxide from the atmosphere (DRC).
The idea behind sulfate aerosol geoengineering has its origins in studies of how volcanic eruptions affect the climate. When volcanoes erupt, they emit a combination of ash (tiny fragments of volcanic rock and glass) and gases, including water vapor, carbon dioxide, sulfur dioxide, and others. The sulfur gases condense into little droplets called sulfate aerosols that can reflect sunlight, cutting down on the solar energy that drives our climate system — this causes a cooling effect that can last for a couple of years, ending when the aerosols finally fall back to the surface. The key is that if the force of the eruption is great enough, the sulfate aerosols end up in the stratosphere (higher than 15 km in altitude), where there is essentially no water to wash the aerosols out, so they can stick around for a few years. The idea that we humans could add sulfate aerosols to the stratosphere to cool the climate (or prevent further warming) was first suggested by the Nobel Prize-winning atmospheric chemist Paul Crutzen in 2006. Crutzen admitted that a much smarter way of dealing with the problem of global warming would be to drastically reduce our carbon emissions, but he pointed out that humanity has not yet shown the resolve needed to tackle this problem; so he proposed this as a potentially easier way to avoid the dangerous consequences of further warming.
This is a thorny issue for a number of reasons. Some people worry that pursuing this kind of solution is dangerous because it treats only the symptoms of the problem without tackling the underlying cause — the excess CO2 we keep adding to the atmosphere. Many see it as dangerous in the sense that tampering with natural systems almost always has hidden, unintended consequences. In fact, sophisticated 3D climate model simulations that include added sulfate aerosols suggest that the cooling would not be uniform and that precipitation patterns would change as well, which means that some areas of the globe might suffer while others would see benefits. But if you are in a car going down a hill and your brakes are not working, you may need to consider doing something other than stepping on the brake pedal — and time is of the essence!
How would this work? Crutzen and others since him have worked out a variety of scenarios for getting the sulfate into the stratosphere, including sending loads of it up in balloons and dumping it out of huge military airplanes that can fly high enough. The general idea is illustrated in the figure below, where the red plus and minus signs indicate changes to the solar energy caused by the sulfate aerosols.
Although this is difficult to price out, some estimates (Robock et al., 2009) suggest it would cost between 1 and 30 billion dollars per year to add one teragram (1 Tg = 10¹² g) of sulfate to the stratosphere each year. 1 Tg of sulfate, evenly distributed in the stratosphere, is estimated to block about 1.3 W/m² of solar energy, which is about 0.4% of the total. In our simple climate model, changing the solar input by 1 W/m² changes the temperature by about 0.35°C. One of the challenges is that the sulfate aerosols fall out of the atmosphere, so we would have to add them constantly to maintain the desired level of solar reduction. In the model, if you activate this geoengineering scheme, you set a desired limit on the global temperature change, and sulfate is added in the amount necessary to keep the temperature change close to this limit. This then generates a cost that affects the global economy.
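The arithmetic above can be checked with a quick back-of-the-envelope calculation. This is only a sketch using the figures quoted in this section (1.3 W/m² of blocked sunlight per Tg, 0.35°C per W/m² from the simple climate model, and the Robock et al. cost range); the function names are ours, not part of the course model.

```python
# Back-of-the-envelope sulfate aerosol geoengineering estimate,
# using only the numbers quoted in the text above.

FORCING_PER_TG = 1.3       # W/m^2 of solar energy blocked per Tg of sulfate
SENSITIVITY = 0.35         # deg C of temperature change per W/m^2 (simple model)
COST_RANGE = (1e9, 30e9)   # dollars per Tg per year (low and high estimates)

def cooling_from_sulfate(tg_per_year):
    """Approximate cooling (deg C) from a sustained injection rate (Tg/yr)."""
    return tg_per_year * FORCING_PER_TG * SENSITIVITY

def sulfate_needed(target_cooling_c):
    """Tg of sulfate per year required to offset a given warming (deg C)."""
    return target_cooling_c / (FORCING_PER_TG * SENSITIVITY)

tg = sulfate_needed(2.0)   # offset 2 deg C of warming
low, high = (tg * c for c in COST_RANGE)
print(f"{tg:.1f} Tg/yr, costing ${low/1e9:.0f}-${high/1e9:.0f} billion per year")
```

Offsetting 2°C of warming this way requires roughly 4–5 Tg of sulfate per year, every year — which is why the model keeps charging the economy for it as long as the scheme runs.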
The idea of carbon sequestration has been around for a relatively long time — emissions from a fossil fuel-burning power plant can be captured at the source and processed to remove the CO2. The extracted CO2 is then pumped deep underground where it can reside in the pore spaces of sedimentary rocks. In some cases, it is pumped back into the rocks from which oil or gas have previously been removed. Carbon sequestration is expensive, but it can virtually eliminate the carbon emissions from some power plants. However, carbon sequestration does not actually remove CO2 from the atmosphere, and if we were to continue to burn fossil fuels in cars and in homes, then the concentration of CO2 in the atmosphere would continue to rise. Carbon sequestration has not really taken off because the extra expense means that it is far cheaper for a power company to produce new electricity using solar or wind systems.
The direct removal of CO2 from the atmosphere, sometimes called negative emissions, is a relatively new idea — the first facility was put into action in the fall of 2017 in Iceland. A Swiss company called Climeworks developed a partnership with a geothermal power plant to essentially test the technology. There are several other companies in Canada and the US at earlier stages of development. The general idea is to pump huge volumes of air through a chamber containing numerous small beads coated with a substance that effectively grabs onto CO2 molecules. When the beads have absorbed as much as possible, the chamber is sealed off and the CO2 is released by changing the humidity in the chamber. The resulting air in the chamber has a very high concentration of CO2; it is mixed with water and then pumped deep underground. The CO2-rich solution reacts with the basaltic bedrock and new minerals are formed, locking up the CO2. This process essentially takes the CO2 out of the air and turns it into rock — pretty clever! The general scheme is shown in the diagram below.
This process is in the early stages of development, so it is difficult to know how much it will cost. The companies doing this believe that they can probably get the cost down to $366 billion per Gt C removed. There would have to be thousands of these units set up around the world in order to scale this up. In our model, we are going to assume that if we were to begin this process, we would initially be limited to 5 Gt C removed from the atmosphere per year, but that as time goes on, our capabilities would increase and we could draw more and more out of the atmosphere.
This process of direct removal of carbon from the atmosphere has some consequences for other parts of the carbon cycle. As we remove CO2 from the atmosphere, the concentration decreases, which means that CO2 stored in the oceans will begin to flow into the atmosphere. This means that we will need to remove much more carbon from the atmosphere than you might think if we are aiming for a given concentration of CO2.
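The rebound effect described above can be illustrated with a toy two-reservoir sketch. All of the numbers here (reservoir sizes, the 2%-per-year exchange rate, the 5 Gt C/yr removal rate) are assumptions chosen for demonstration — they are not the course model's values — but the qualitative behavior matches: the net atmospheric drawdown is smaller than the total amount removed.

```python
# Toy atmosphere-ocean carbon exchange: remove carbon from the atmosphere
# and watch the ocean partially refill it, so net drawdown < amount removed.
# All numbers are illustrative, not taken from the course model.

atm = 900.0           # Gt C in the atmosphere (illustrative)
ocean_excess = 180.0  # Gt C of excess carbon stored in the ocean (illustrative)
EXCHANGE = 0.02       # fraction of the ocean's excess outgassed per year

removed_total = 0.0
for year in range(50):
    atm -= 5.0                  # direct removal: 5 Gt C per year
    removed_total += 5.0
    flux = EXCHANGE * ocean_excess   # ocean outgasses toward the lowered atmosphere
    ocean_excess -= flux
    atm += flux

net_drawdown = 900.0 - atm
print(f"Removed {removed_total:.0f} Gt C; atmosphere fell by only {net_drawdown:.0f} Gt C")
```

Running this sketch, roughly half of the carbon removed is replaced by outgassing from the ocean over 50 years — which is why hitting a target atmospheric pCO2 requires removing considerably more carbon than the atmospheric difference alone would suggest.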
Click here to go to the model [383]. Please Note: The model in the videos below may look slightly different than the model linked here. Both models, however, function the same.
When you open the model, you will see that there are a lot of controls, reflecting the full range of choices we can make about our future energy consumption and geoengineering. There are also 15 pages of graphs that show the results of the model. Be sure to watch the video below that introduces you to the model before proceeding.
Be sure to watch the video below that shows how to do problems 1-3.
In this first experiment, we will use a combination of conservation of energy and greater reliance on renewable energy sources to limit climate change. Open the model and run it without making any changes — we’ll call this the “do nothing” scenario. You will see that we end up with 6.55°C of warming by the end of the model run in 2210, and if you look through the other graphs, you will see that the ocean pH drops to 7.68 (graph page 2). Looking at some of the other graphs, we see that this scenario results in a bit over 19 thousand dollars per person (graph page 14) in terms of the total costs (energy, conservation, climate damages, geoengineering), and an ending net economic output of about $460 trillion.
Now, change the model as described below:
Green colored sliders
Conserve upper limit: 30
Conserve growth rate: 0.1
Blue colored sliders
Renew upper limit: 85
Renew growth rate: 0.1
Then, run the model and answer the following questions by finding the values from the resulting graphs.
Now, we will try geoengineering alternatives, beginning with the direct removal of carbon — DCR. Be sure to watch the video below that shows how to get the answers to problems 4-7.
Make the following changes to the model:
Green colored sliders
Conserve upper limit: 1
Conserve growth rate: 0.1
Blue colored sliders
Renew upper limit: 20
Renew growth rate: 0.1
Red colored sliders
DCR switch: On
DCR start time: 2030
Target atm pCO2: 470
DCR cost decline rate: 0.02
DCR growth rate: 0.02
DCR init: 5
Once you’ve made these changes, run the model, and answer the following.
Now we will try sulfate aerosol geoengineering — as before, the following video shows how to do this section.
Make the following changes to the model:
Green colored sliders
Conserve upper limit: 1
Conserve growth rate: 0.1
Blue colored sliders
Renew upper limit: 20
Renew growth rate: 0.1
Red colored sliders
DCR switch: Off
Orange colored sliders
Sulfate switch: On
Sulfate start time: 2030
Target T change: 2.0
Sulfate cost decline rate: 0.0
Once you’ve made these changes, run the model and answer the following.
Now, let’s step back and consider what we have found here. Clearly, the “do-nothing” scenario is the worst in terms of temperature change and costs. But how about the other scenarios, each of which gets us to a temperature change of close to 2°C by the end of the model run — which is the best? The following video shows how to answer questions 10-12.
In this module, you should have mastered the following concepts:
You should have read the contents of this module carefully, completed and submitted any labs and the Yellowdig Entry and Reply, and taken the Module Quiz. If you have not done so already, please do so before moving on to the next module. Incomplete assignments will negatively impact your final grade.
Lab 12: Geoengineering Climate Model
Download this lab as a Word document: Lab 4: IPCC Projections for the 21st Century [384]. (Please download required files below.)
In this lab, we will examine the projections for temperature and precipitation under the different emissions scenarios. These projections were made in 2009, after the 4th report of the IPCC, but the overall patterns have not changed that much.
The goal of this lab is:
1. To observe the differences between the outcomes for temperature and precipitation under the different scenarios: A2 (the worst case), A1B, and B1 (the best case).
Load the IPCC Scenarios.kmz [385] file. This file allows you to observe the change in temperature (in degrees relative to the 1961-1990 mean) and precipitation (in mm/day relative to the 1961-1990 mean) in two different ways. You can use the slider at the top left to observe the projections for a narrow time frame (like a month or a year), or you can look at the decadal averages using the buttons in the Places box at right. For the purposes of this lab, we will use the decadal averages. The temperature scale for the projections is very confusing and does not match the colors in the maps, so we have created a new scale, New Temperature Legend.kmz [386], which will cover the old one. You must open that file as well.
Warning: the continent outlines are very faint once the projection maps are loaded, so you will need to toggle the “turn layers off” button at the bottom of the Places box to view the locations of the continents. You might also want to have a map of the US and other continents open if you are unfamiliar with the geography.
Again, we begin with Practice Questions that will provide you with experience with the type of questions you will receive in the Graded Assignment. Make sure you take them and check your answers in Canvas before moving on to the Graded Assignment, which is entirely in Canvas. It is critical that you have a reliable Internet connection because you will get only one chance to take the lab.
Make sure both kmz files are loaded into Google Earth.
Before you turn on the Climate Projections, please center your screen over Australia at an altitude of 3000-4000 mi. Make sure you notice the difference between desert in the middle (browns and reds) and more lush areas on the coasts (greens). In the lab, you will be comparing model projections of how precipitation and temperature will change with the three different emissions scenarios, high emissions (A2), medium emissions (A1B) and low emissions (B1). Do the best you can to distinguish between the colors; we allow a range of answers based on the different possibilities. Find a map of Australia showing the major cities (or you can search for them in Google Earth).
Compare projections for the 2000 decade with the 2070 decade under scenarios A2 and B1. Make sure you take into account the minor temperature increase in the 2000 decade, seen in the light yellow colors (relative to the 1961-1990 mean). The light yellow color is generally about 1°C. Note that not every temperature range is represented on the map. Match the colors above 3°C.
Compare projections for the 2000 decade with the 2090 decade, with just A. Make sure you take into account the minor precipitation increase in the 2000 decade (relative to the 1961-1990 mean). Note that the amounts are in mm/day!
Now, look at the whole globe under the medium emissions scenario (A1B). I recommend you move to an altitude of 6000 to 7000 mi and use the “turn layer off” button on and off frequently to know where you are.
Download this lab as a Word document: Lab 6: Ocean Properties & Circulation [387] (Please download required files below.)
The lab this week is fairly short. The goal is to give you a more visual overview of surface ocean properties and circulation and their relationship to climate change. The questions require you to have mastered the material in the module, so make sure you have read and understood it completely.
The goals of this lab are to:
Before you begin answering the Practice questions, please load and get comfortable turning on and off the following kmz files:
Please Note: You will be downloading kmz files from the Internet this week, not directly from this Lab.
The map shows average Sea Surface Temperatures (SSTs). The colors are a little close together, so you will need to look carefully to determine trends. Note also that the data are acquired from a number of different sources, so there can be artificial “joins” on the map; we try to make sure these do not affect the questions. If certain areas don’t show the temperatures, keep rotating the globe.
Now, turn on the surface current kmz and answer the following questions. Note the white arrows are surface ocean currents and the blue and red lines are the global conveyor belt. If you don’t know where places are, you can search for them by adding a name in the box, top left.
Now turn on the chlorophyll kmz and answer the following questions. The colors again are very subtle; make sure you use the scale at bottom right - anything lighter blue or white is considered high productivity.
The future: Observe the El Niño visualization for the years 2015-2016 [391]. Since warm water holds less oxygen than cold water, determine how dead zones expanded from 2015 to 2016.
Download this lab as a Word document: Lab 9: Land Use Changes [392] (Please download required files below.)
In this lab, we will evaluate how land use has changed over the last century in many places around the globe. We will deal first with deforestation, then with land use in the developed and developing worlds.
Disappearing Forest kmz file [393]
Anthrome 1800 kml file [394]
Anthrome 1900 kml file [395]
Anthrome 2000 kml file [396]
Ellis et al. Table [397]
Part 1. Global Deforestation
Download the Disappearing Forest kmz file [393]. Make sure you have the Deforestation rate (area) button selected. Click on a country to see its data. First, look at the numbers in the top right corner of the boxes; this is the percent of natural forest lost between those years.
Countries/numbers in red are those that have lost significant forest cover between 1990 and 2005; those in green or brown have lost very little or have gained forest (i.e., have made efforts to plant new forests).
You might also want to open Google Maps in a separate window to help you find countries. The legend for land use is given below the maps.
Answer the following questions.
Part 2. Global land use changes
Open the three Anthrome kml files: Anthrome 1800 kml file [394], Anthrome 1900 kml file [395], and Anthrome 2000 kml file [396].
Anthromes are the globally significant ecological patterns created by sustained interactions between humans and ecosystems, including urban, cropland, rangeland, and woodland anthromes. They provide a great basis for studying the impacts of humans on land use patterns. Here, we will study the distribution of anthromes between the 1800 AD and 2000 AD maps and summarize the changes. Make sure the legends are open. Please open the Ellis et al. Table [397], which gives the definitions of the different land use categories. Note that crops, pasture, and trees are given as percentages of land cover, while population is in people per square kilometer.
You might also want to open Google Maps in a separate window to help you find locations. The legend for land use is given below the maps.
A. Areas with less than 20% pasture and a population of between 1 and 10 people per km2
B. Villages with over 100 people per km2 and crops covering over 20% of the area
C. Urban areas with more than 2500 people per km2
A. Areas with more than 100 people per km2 with irrigation covering more than 20% of the area
B. Areas with less than 100 people per km2 and crops covering less than 20% of the area
C. Areas with between 1 and 10 people per km2 with crops covering more than 20% of the area
Next, we will focus on land use change in China.
A. Increase
B. Decrease
A. Increase
B. Decrease
A. 1800
B. 1900
C. 2000
A. 1800
B. 1900
C. 2000
A. 1800
B. 1900
C. 2000
A. 1800
B. 1900
C. 2000
A. There is an increase in the area of rangelands at the expense of urban and villages and mixed settlements
B. There is a gradual increase in the area of villages and mixed settlements at the expense of seminatural areas
C. There is a decrease in the area of croplands and rangelands
Finally, open the IPCC projection kmz file (you used this in the lab in Module 4) and look at temperature and precipitation in the high emission scenario in 2090. You will need to flip between the projections and the 2000 anthrome map for China.
A. Falling precipitation
B. Rising temperature
A. North
B. South
This week’s lab concerns the threats to a critically endangered (CR) species and possible actions that can be taken to alleviate these threats. The lab will involve independent research and submission of a 400-500 word paper (about a page of single-spaced writing). The objective of the paper is to summarize the threats to a CR species and make recommendations about conservation that can help save the species. The lab will require a few hours of independent online research. You must submit your paper through Turnitin [398].
Start by carefully selecting a CR species from Red List: Discover Species [349]. Make sure the species is designated as CR!
You will want to choose a species about which much is known and for which conservation efforts are underway. You may not choose any species discussed in the Module. Your report must include the following (a paragraph on each).
You may use any online source but you must cite everything you have used. Especially useful sources include the following:
World Wildlife Fund [399]
The PEW Charitable Trusts [400]
African Wildlife Foundation [401]
Earth's Endangered Creatures [402]
National Wildlife Federation [403]
Information on Population | 20%
Information on Threat | 25%
Information on Conservation | 25%
Writing style and grammar | 20%
Referencing | 10%
Submit your lab as a Word or PDF file in the Module 11 Lab Submission on Turnitin [398] by midnight on the due date listed in the Syllabus. Make sure your name is in the file name and at the top of the paper.
[223] https://www.youtube.com/channel/UCBkWMXkD1g8ok7fs5vOAhaQ
[224] https://vancouversun.com/news/jellyfish-thrive-in-adverse-conditions-ubc-study-finds
[225] https://www.youtube.com/channel/UCeW5IMmy5uUzEv3f4ijV2cg
[226] https://creativecommons.org/licenses/by-nc-sa/2.0/
[227] http://water.org/
[228] https://www.e-education.psu.edu/earth103/node/889
[229] https://www.e-education.psu.edu/earth103/sites/www.e-education.psu.edu.earth103/files/Lab%208.docx
[230] https://waterwatch.usgs.gov/index.php?id=ww_animation
[231] http://www.solpass.org/
[232] https://kunden.dwd.de/GPCC/Visualizer
[233] http://www.cpc.ncep.noaa.gov/products/precip/CWlink/ENSO/composites/EC_LNP_index.shtml
[234] https://www.e-education.psu.edu/earth103/sites/www.e-education.psu.edu.earth103/files/module08/MississippiStreamGagesUpdated.kmz
[235] https://water.usgs.gov/edu/earthgwaquifer.html
[236] http://www.co.pepin.wi.us/
[237] https://www.flickr.com/photos/golf_pictures/
[238] https://www.flickr.com/photos/golf_pictures/5111143657/
[239] https://www.youtube.com/channel/UC5O114-PQNYkurlTg6hekZw?feature=emb_ch_name_ex
[240] http://www.groundwateruk.org/Image-Gallery.aspx
[241] https://creativecommons.org/share-your-work/public-domain/cc0/
[242] https://www.youtube.com/channel/UCXzicKpfSXa90q54SE6Nx5Q?feature=emb_ch_name_ex
[243] https://www.youtube.com/channel/UCV3Nm3T-XAgVhKH9jT0ViRg
[244] http://www.flickr.com/photos/jaxstrong/
[245] https://www.discovermagazine.com/environment/2-degrees-of-separation
[246] https://www.youtube.com/channel/UCqTDtf5WMrE7z8JCK_FrKAg?feature=emb_ch_name_ex
[247] http://www.mdba.gov.au/
[248] http://www.nswfarmers.org.au/
[249] https://factsanddetails.com/china/cat10/sub64/item399.html#chapter-6
[250] https://www.youtube.com/channel/UCF8Uhj557ROtKoMm8xdZyKA?feature=emb_ch_name_ex
[251] https://www.youtube.com/channel/UCHXPqS4M2OcOUGYEtPM3CxQ?feature=emb_ch_name_ex
[252] https://www.youtube.com/channel/UCdw3GhAw0c7oPeTDV5gn6Zg?feature=emb_ch_name_ex
[253] https://commons.wikimedia.org/wiki/User:Chrkl
[254] http://en.wikipedia.org/wiki/Reverse_osmosis
[255] https://www.youtube.com/watch?v=5CADLfXOhkU
[256] https://www.youtube.com/@VICENews
[257] http://earthobservatory.nasa.gov/Features/WorldOfChange/lake_powell.php
[258] https://www.nasa.gov/topics/earth/features/ndvi_locusts.html
[259] https://commons.wikimedia.org/wiki/User:Cmglee
[260] http://en.wikipedia.org/wiki/Day_of_Seven_Billion
[261] https://www.youtube.com/channel/UCpVm7bg6pXKo1Pr6k5kxG9A?feature=emb_ch_name_ex
[262] http://en.wikipedia.org/wiki/File:Darfur_refugee_camp_in_Chad.jpg
[263] http://www.flickr.com/photos/un_photo/4421127032/sizes/o/
[264] http://www.nairaland.com/582396/biafra-nigerian-civil-war-pictures/1
[265] https://www.youtube.com/channel/UCIUYbdQt-ll9ICPE3vXUCgA
[266] http://documents.worldbank.org/curated/en/606251468324545952/pdf/520610WP0Agrui1round0note101PUBLIC1.pdf
[267] https://www.flickr.com/photos/dmahendra/
[268] http://www.flickr.com/photos/dmahendra/3743846699/
[269] https://ciat.cgiar.org/
[270] https://www.flickr.com/photos/pagedooley/
[271] https://www.flickr.com/photos/pagedooley/2641336835/sizes/o/
[272] https://en.wikipedia.org/wiki/User:Paulkondratuk3194
[273] https://commons.wikimedia.org/wiki/File:Irrigation1.jpg#file
[274] https://en.wikipedia.org/wiki/Arthur_Rothstein
[275] https://en.wikipedia.org/wiki/Farm_Security_Administration
[276] https://en.wikipedia.org/wiki/File:Farmer_walking_in_dust_storm_Cimarron_County_Oklahoma2.jpg#/media/File:Farmer_walking_in_dust_storm_Cimarron_County_Oklahoma2.jpg
[277] https://www.youtube.com/channel/UClzCn8DxRSCuMFv_WfzkcrQ?feature=emb_ch_name_ex
[278] http://Bitsofscience.org
[279] https://www3.epa.gov/
[280] https://creativecommons.org/licenses/by/4.0/
[281] https://www.un.org/en/
[282] https://nca2009.globalchange.gov/projected-temperature-change/index.html#footnote3_3crj4pl
[283] https://nca2009.globalchange.gov/projected-change-north-american-precipitation-2080-2099/index.html
[284] http://www.flickr.com/photos/hamed/
[285] http://www.flickr.com/photos/jantik/111670098/sizes/z/
[286] https://www.flickr.com/photos/nickyfern/404443201/sizes/z/
[287] http://en.wikipedia.org/wiki/File:Chilean_purse_seine.jpg
[288] http://creativecommons.org/licenses/by-sa/3.0/us/
[289] https://www.flickr.com/photos/48722974@N07/4523955644/sizes/z/
[290] https://www.youtube.com/channel/UC6JvI7uKCP7BFM5TafNctkA?feature=emb_ch_name_ex
[291] https://www.flickr.com/photos/47108884@N07/5478682616/sizes/z/
[292] http://creativecommons.org/licenses/by-sa/2.0/
[293] https://ofrf.org/staff/mark-schonbeck/
[294] https://www.flickr.com/photos/argonne/
[295] http://www.flickr.com/photos/argonne/3812650188/sizes/z/
[296] https://www.flickr.com/photos/siftnz/
[297] http://www.flickr.com/photos/siftnz/4169699511/sizes/z/
[298] https://www.youtube.com/watch?v=xL9F7OS5Uww
[299] https://www.youtube.com/channel/UC86dbj-lbDks_hZ5gRKL49Q
[300] http://www.flickr.com/photos/jeffcouturier/
[301] https://www.flickr.com/photos/richie_in_london/2520063950
[302] https://www.flickr.com/people/68824346@N02
[303] https://commons.wikimedia.org/wiki/File:Agricultural_Fields,_Wadi_As-Sirhan_Basin,_Saudi_Arabia_-_NASA_Earth_Observatory.jpg#/media/File:Agricultural_Fields,_Wadi_As-Sirhan_Basin,_Saudi_Arabia_-_NASA_Earth_Observatory.jpg
[304] http://www.flickr.com/photos/bytemarks/
[305] http://www.flickr.com/photos/bytemarks/5210687107/in/photostream/
[306] https://www.flickr.com/photos/eu_echo/
[307] https://www.flickr.com/photos/69583224@N05/8022566649/
[308] http://www.flickr.com/photos/crossroads_foundation/
[309] https://commons.wikimedia.org/wiki/User:Jashuah
[310] http://en.wikipedia.org/wiki/File:FAO_Food_Price_Index.png
[311] https://creativecommons.org/licenses/by-sa/3.0/us/
[312] http://www.flickr.com/photos/zedworks/
[313] http://www.flickr.com/photos/zedworks/2940260897/in/photostream/
[314] http://www.cmar.csiro.au/sealevel/sl_about_intro.html
[315] https://www.youtube.com/channel/UCAY-SMFNfynqz1bdoaV8BeQ?feature=emb_ch_name_ex
[316] https://www.youtube.com/channel/UCob0WFbLuk2BaiJ9RFscNaw?feature=emb_ch_name_ex
[317] https://www.youtube.com/channel/UCXCOzJafcdIr95xGMNyWQgw?feature=emb_ch_name_ex
[318] http://asterweb.jpl.nasa.gov/
[319] https://www.youtube.com/channel/UCtZdUYUZr493AUh_EInBYxQ?feature=emb_ch_name_ex
[320] https://commons.wikimedia.org/wiki/File:Thwaits_Glacier.jpg
[321] http://www.co-ops.nos.noaa.gov
[322] http://www.psmsl.org/data/obtaining/rlr.monthly.plots/447_high.png
[323] https://en.wikipedia.org/wiki/User:Dragons_flight
[324] http://www.globalwarmingart.com/
[325] http://sealevel.jpl.nasa.gov/files/archive/technology/P38232.jpg
[326] https://www.e-education.psu.edu/earth103/sites/www.e-education.psu.edu.earth103/files/module09/NewOrleansElevation_0.jpg
[327] https://www.e-education.psu.edu/earth103/sites/www.e-education.psu.edu.earth103/files/module10/neworleans_rad_2005.jpg
[328] http://stratum@uga.edu
[329] http://strata.uga.edu/sequence/
[330] https://www.youtube.com/channel/UCeXH8GZyV3sVqAr45AvupOA?feature=emb_ch_name_ex
[331] https://www.youtube.com/channel/UCc8UGBJ8FHsp-zReFQjChqQ?feature=emb_ch_name_ex
[332] https://commons.wikimedia.org/wiki/User:B137
[333] https://www.flickr.com/photos/13748835@N00/
[334] https://www.flickr.com/photos/13748835@N00/3599476780/
[335] https://www.youtube.com/channel/UCK0z0_5uL7mb9IjntOKi5XQ
[336] https://www.youtube.com/channel/UCZyZP0de8jysluFzeFmH8hg?feature=emb_ch_name_ex
[337] https://www.youtube.com/channel/UCVznNzba4N9n8I-sZPbhxAQ?feature=emb_ch_name_ex
[338] https://www.youtube.com/channel/UC4AH2XGXPfm38ytfwebyXeQ?feature=emb_ch_name_ex
[339] https://www.e-education.psu.edu/earth103/sites/www.e-education.psu.edu.earth103/files/Lab%2010.docx
[340] https://www.youtube.com/channel/UCjaDE07wjbvAJzAHiEDmbPA
[341] https://www.e-education.psu.edu/earth103/sites/www.e-education.psu.edu.earth103/files/module10/PSMSL%20Tide%20guage.kml
[342] https://coast.noaa.gov/slr/
[343] https://commons.wikimedia.org/wiki/File:Extinction_Intensity.svg
[344] https://en.wikipedia.org/wiki/File:Extinction_intensity.svg#/media/File:Extinction_intensity.svg
[345] http://en.wikipedia.org/wiki/File:Cane-toad.jpg
[346] https://en.wikipedia.org/wiki/user:LiquidGhoul
[347] https://en.wikipedia.org/wiki/user:Tnarg_12345
[348] http://en.wikipedia.org/wiki/File:Bufo_marinus_distribution.png
[349] http://www.iucnredlist.org/
[350] https://en.wikipedia.org/wiki/User:Pengo
[351] http://en.wikipedia.org/wiki/Extinct_in_the_Wild
[352] https://creativecommons.org/licenses/by/2.5
[353] http://en.wikipedia.org/wiki/File:Biodiversity_Hotspots.svg
[354] https://creativecommons.org/licenses/by-sa/2.5/deed.en
[355] https://creativecommons.org/licenses/by-sa/2.0/deed.en
[356] https://creativecommons.org/licenses/by-sa/1.0/deed.en
[357] http://en.wikipedia.org/wiki/File:Varroa_destructor_on_honeybee_host.jpg
[358] https://creativecommons.org/share-your-work/public-domain/
[359] https://en.wikipedia.org/wiki/User:Pollinator
[360] http://en.wikipedia.org/wiki/File:Bee_migration_9045.JPG
[361] https://commons.wikimedia.org/w/index.php?title=User:Lgeissl&action=edit&redlink=1
[362] https://en.wikipedia.org/wiki/File:Verunglueckter_Rotmilan.jpg
[363] https://en.wikipedia.org/wiki/File:Northern_Spotted_Owl.USFWS-thumb.jpg
[364] https://en.wikipedia.org/wiki/en:Roelant_Savery
[365] https://en.wikipedia.org/wiki/File:Edward%27s_Dodo.jpg
[366] https://en.wikipedia.org/wiki/User:Froggydarb
[367] https://www.flickr.com/people/14405058@N08
[368] https://creativecommons.org/licenses/by-sa/2.0
[369] https://commons.wikimedia.org/wiki/User:Cotinis
[370] https://creativecommons.org/licenses/by-sa/2.5
[371] https://calphotos.berkeley.edu/cgi/photographer_query?query_src=photos_index&where-name_full=Franco+Andreone&one=T
[372] http://en.wikipedia.org/wiki/File:Amphibians.png
[373] https://www.usgs.gov/media/images/tracking-polar-bears
[374] http://en.wikipedia.org/wiki/File:Ursus_maritimus_us_fish.jpg
[375] http://en.wikipedia.org/wiki/File:Polar_bear_range_map.png
[376] http://alaska.usgs.gov/
[377] https://www.e-education.psu.edu/earth103/node/1068
[378] https://www.youtube.com/channel/UCFJNcE0iHj7P6dhp5iCZRLg?feature=emb_ch_name_ex
[379] https://en.wikipedia.org/wiki/User:Solipsist
[380] http://en.wikipedia.org/wiki/File:Comberton_village_green.jpg
[381] https://www.climate.gov/news-features/event-tracker/astounding-heat-obliterates-all-time-records-across-pacific-northwest
[382] https://en.wikipedia.org/wiki/List_of_countries_by_carbon_dioxide_emissions_per_capita
[383] https://exchange.iseesystems.com/public/davidbice/earth103-m12
[384] https://www.e-education.psu.edu/earth103/sites/www.e-education.psu.edu.earth103/files/Lab%204.docx
[385] https://www.e-education.psu.edu/earth103/sites/www.e-education.psu.edu.earth103/files/module04/IPCC%20Scenarios.kmz
[386] https://www.e-education.psu.edu/earth103/sites/www.e-education.psu.edu.earth103/files/module04/New%20Temperature%20Legend.kmz
[387] https://www.e-education.psu.edu/earth103/sites/www.e-education.psu.edu.earth103/files/Module%206%20Lab.doc
[388] https://www.e-education.psu.edu/earth103/sites/www.e-education.psu.edu.earth103/files/module05/World%20and%20Regional%20Sea%20Surface%20Temperature_0.kmz
[389] https://ti.arc.nasa.gov/m/project/planetary/kml/chl.kmz
[390] http://www.impacttectonics.org/KMLZs/2016%20GCH%20Ocean%20currents.kmz
[391] http://svs.gsfc.nasa.gov/4544
[392] https://www.e-education.psu.edu/earth103/sites/www.e-education.psu.edu.earth103/files/Lab%209.docx
[393] https://www.e-education.psu.edu/earth103/sites/www.e-education.psu.edu.earth103/files/module09/disappearing_forests.kmz
[394] https://www.e-education.psu.edu/earth103/sites/www.e-education.psu.edu.earth103/files/module09/anthrome1800.kml
[395] https://www.e-education.psu.edu/earth103/sites/www.e-education.psu.edu.earth103/files/module09/anthrome1900.kml
[396] https://www.e-education.psu.edu/earth103/sites/www.e-education.psu.edu.earth103/files/module09/anthrome2000.kml
[397] https://www.e-education.psu.edu/earth103/sites/www.e-education.psu.edu.earth103/files/module09/Ellis%20et%20al.%20Table.pdf
[398] http://turnitin.psu.edu/
[399] http://worldwildlife.org/
[400] http://www.pewtrusts.org/en/topics/oceans
[401] http://www.awf.org/wildlife-conservation
[402] http://www.earthsendangered.com/index.asp
[403] https://www.nwf.org/Educational-Resources/Wildlife-Guide/Understanding-Conservation/Endangered-Species