In this lesson, we will continue our investigation of climate models, examining more complex models of the climate system than in the previous lesson. We will first investigate a slightly more complex version of the EBM encountered in Lesson 4, in which we explicitly insert an atmospheric layer above the Earth's surface. We will then consider models that represent the full three-dimensional geometry of the Earth system and simulate atmospheric winds and ocean currents, patterns of rainfall and drought, and other key attributes of the climate system. We will also explore the concept of 'fingerprint detection', a method that allows us to compare model predictions against observations to discern whether or not the signal of anthropogenic climate change can already be detected.
By the end of Lesson 5, you should be able to:
Please refer to the Syllabus for specific time frames and due dates.
The following is an overview of the required activities for Lesson 5. Detailed directions and submission instructions are located within this lesson.
If you have any questions, please post them to our Questions? discussion forum (not e-mail), located under the Home tab in Canvas. The instructor will check that discussion forum daily to respond. Also, please feel free to post your own responses if you can help with any of the posted questions.
We can increase the complexity of the zero-dimensional model by incorporating the atmospheric greenhouse effect in a slightly more realistic manner than is embodied by the ad hoc gray body model explored in the previous lecture. We now include an explicit atmospheric layer in the model, which has the ability to absorb and emit infrared radiation.
We will approximate the emissivity of Earth's surface as one; that is, we will assume that the Earth's surface emits radiation as a black body. The atmosphere itself has a lower but non-zero emissivity, i.e., it emits a fraction of what a black body would emit at a given temperature. This emissivity is due to the presence of greenhouse gases within the atmosphere, and we will denote this atmospheric emissivity by ε (not to be confused with the epsilon of previous lessons, which was associated with the emissivity of Earth's surface, a quantity we are approximating here as unity!). According to Kirchhoff's Law, at thermal equilibrium the emissivity of a body equals its absorptivity (i.e., the fraction of incident radiation that is absorbed by the body). Therefore, ε is also a measure of the efficiency of the atmosphere's absorption of any infrared radiation (IR) incident upon it. IR radiation that is not absorbed by the atmosphere is transmitted through it; therefore, 1 − ε is the fraction of incident IR radiation that is transmitted through the atmosphere without being absorbed.
An ε of zero corresponds to no greenhouse effect at all, while an ε of unity corresponds to a perfect IR absorber, i.e., a perfect greenhouse effect. The true greenhouse effect is, of course, somewhere in between, i.e., 0 < ε < 1.
We denote the effective albedo of the Earth system (i.e., the portion of incoming solar radiation immediately reflected back to space) as A, and we will now distinguish between the atmospheric temperature T_A (which we will envision as representing the mid-troposphere, somewhere around 5.5 km above the surface, where roughly half the atmosphere by mass lies below) and the surface temperature T_S.
T_A is related to, but not equivalent to, another quantity known as the effective radiating temperature, which we will denote as T_E. T_E is the temperature the Earth would have if it were a black body, i.e., if there were no greenhouse effect. It can be thought of as the temperature at the effective height in the atmosphere from which Earth is radiating infrared radiation back to space. In the limit of a perfectly emissive atmosphere (ε = 1), as you can verify from our mathematical treatment below, we would have the equality T_A = T_E.
You may recall from our earlier discussion (in Lesson 1 [2]) of the vertical structure of the atmosphere that atmospheric temperatures cool on average by roughly 6.5°C per kilometer in the troposphere, a rate known as the standard lapse rate.
For the approximate current value of the solar constant S = 1370 W/m2, we saw in Lesson 4 that the black body temperature, i.e., the effective radiating temperature T_E, is roughly 255 K [3].
Given that the Earth's average surface temperature is T_S = 288 K, the effective radiating temperature is T_E = 255 K, and the standard lapse rate is roughly 6.5°C per kilometer, can you determine the effective radiating level in the atmosphere?
Answer: The surface is 288 K − 255 K = 33 K warmer than the effective radiating level. With a lapse rate of 6.5°C per kilometer, this temperature difference corresponds to a height of roughly 33 / 6.5 ≈ 5.1 km above the surface.
We can now express the condition of energy balance at each level in our simplified model of the Earth:
Balancing incoming and outgoing radiation at the top of the atmosphere gives:

(1 − A) S / 4 = ε σ T_A^4 + (1 − ε) σ T_S^4
Balancing incoming and outgoing radiation for the atmospheric layer (which absorbs a fraction ε of the IR emitted upward by the surface, and emits both upward and downward) gives:

ε σ T_S^4 = 2 ε σ T_A^4
Finally, balancing incoming and outgoing radiation at the surface gives:

(1 − A) S / 4 + ε σ T_A^4 = σ T_S^4
Solving the system of equations for T_S and T_A gives:

T_S = [ (1 − A) S / (4 σ (1 − ε/2)) ]^(1/4)

T_A = [ (1 − A) S / (8 σ (1 − ε/2)) ]^(1/4)
Or, more simply, noting that the atmospheric layer balance implies T_S^4 = 2 T_A^4:

T_A = T_S / 2^(1/4)
Let us use the standard values of A = 0.3 and S = 1370 W / m2.
Using the relation T_A = T_S / 2^(1/4) and T_S = 288 K, we also get the result T_A = 242 K. This is modestly lower than the effective radiating temperature T_E = 255 K, indicating that the atmospheric layer is found at about 5.5 km, a modestly higher level in the atmosphere than the 5.1 km you calculated earlier.
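These numbers are easy to check directly from the formulas above. The short sketch below (a quick numerical check, not the course's own code) uses ε = 0.77, the default atmospheric emissivity in the interactive model discussed in the next section:

```python
# Direct evaluation of the one-layer EBM formulas (illustrative check only).
SIGMA = 5.67e-8    # Stefan-Boltzmann constant (W m^-2 K^-4)
S = 1370.0         # solar constant (W/m^2)
A = 0.3            # planetary albedo
eps = 0.77         # atmospheric emissivity (default value in the model demo)

absorbed = (1.0 - A) * S / 4.0          # absorbed solar flux (W/m^2)

T_E = (absorbed / SIGMA) ** 0.25                        # effective radiating temperature
T_S = (absorbed / (SIGMA * (1.0 - eps / 2.0))) ** 0.25  # surface temperature
T_A = T_S / 2.0 ** 0.25                                 # atmospheric layer temperature

print(round(T_E), round(T_S), round(T_A))   # 255 288 242
```

With these standard values the model recovers the observed surface temperature of about 288 K, which is why ε = 0.77 serves as the default emissivity.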
Of course, this model is still rather simplistic. For one thing, it only takes into account shortwave and longwave radiation. We haven't accounted for important processes in the energy budget of the actual atmosphere and surface, including convection, latent heating, and the effects of large-scale motion.
We can nonetheless add some further realism to the model by incorporating some of the feedbacks we have discussed [4] previously. In Problem Set 4 you will investigate a slightly more sophisticated version of the standard one-layer model. The model allows for the contribution of clouds to both the Earth's albedo and the longwave absorptive properties of the atmosphere in a very rough way. It also accounts for the positive ice albedo and water vapor feedbacks in a very rough manner.
Each of the feedbacks in the model will be expressed in the form of a feedback factor that you can vary. A feedback factor measures the relative magnitude of a feedback in terms of the amplitude of the response relative to the original forcing. If the response is equal in magnitude to the original forcing, there is no feedback, and the feedback factor is zero. If the response is double that of the original forcing, the feedback factor is one. For example, if a warming of 1°C due to CO2 doubling alone causes an increase in water vapor content that adds an additional equilibrium warming of 2°C, so that the net warming is 3°C, the water vapor feedback factor would be two. Feedback factors can be specified for a particular feedback (e.g., the water vapor feedback), or for the sum over all feedbacks under consideration (e.g., water vapor feedback, ice albedo feedback, and cloud feedback). For example, suppose that the initial 1°C warming, which led to 2°C warming due to water vapor feedback, also led to an increase primarily in low cloud cover, which added a relative cooling of -0.5°C, and a melting of ice, which added an additional relative warming of 1°C. Then the cloud feedback factor would be -0.5, the ice albedo feedback factor would be 1.0, and the net feedback factor would be 2 - 0.5 + 1 = 2.5! Alternatively, we could compute the overall feedback factor by taking the total warming (initial 1°C warming + 2°C - 0.5°C + 1°C = 3.5°C) divided by the initial warming, minus one, i.e., (3.5°C/1°C) - 1 = 2.5. The equilibrium climate sensitivity in this case would be 3.5°C.
While we have measured the feedback factors in terms of the temperature responses, one could also compute these factors in terms of the associated radiative forcing. For example, we know from earlier in the course that the radiative forcing due to CO2 doubling is roughly 3.7 W/m2. Suppose that the increased greenhouse forcing associated with the water vapor feedback led to an additional downward longwave radiative flux of 7.4 W/m2 (and let us assume for this example that the other feedbacks are zero). Then the water vapor feedback factor would be 7.4/3.7 which is, again, two. The total downward radiative forcing would be 3.7 W/m2 + 7.4 W/m2 = 11.1 W/m2, and the overall feedback factor would be 11.1/3.7 - 1 = 2!
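The arithmetic of both versions can be captured in a few lines. The function name `feedback_factor` is ours; the numbers are the ones from the worked examples above:

```python
# Feedback-factor arithmetic from the worked examples in the text.
def feedback_factor(initial, responses):
    """(total response / initial response) - 1; zero means no feedback."""
    total = initial + sum(responses)
    return total / initial - 1.0

# Temperature version: 1 C initial warming from CO2 doubling, plus
# water vapor (+2 C), cloud (-0.5 C), and ice albedo (+1 C) feedbacks.
f_temp = feedback_factor(1.0, [2.0, -0.5, 1.0])
print(f_temp)                     # 2.5 (equilibrium warming: 3.5 C)

# Forcing version: 3.7 W/m^2 direct CO2 forcing plus 7.4 W/m^2 from
# the water vapor feedback (other feedbacks assumed zero).
f_forcing = feedback_factor(3.7, [7.4])
print(round(f_forcing, 6))        # 2.0
```

Note that the same formula handles both temperature and forcing units, since the factor is a ratio of response to initial forcing.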
PRESENTER: Here is our One Layer Energy Balance Model. Right now it's set for the default settings of the various feedback factors.
The cloud feedback factor is set at a modestly negative value of minus 0.83. The water vapor feedback factor is 2. And the ice albedo feedback factor is 0.5. So these two, the water vapor feedback and the ice albedo feedback, are set at positive values. The cloud radiative feedback is set at a negative value.
And these values are roughly equal to the best current estimates of what those feedback factors are. And so using those default settings gives us sort of a mid-range climate sensitivity. And you'll look at that in your problem set.
So you can calculate the climate sensitivity of course by varying the CO2. And the CO2 can be varied with this lever here. Pre-industrial levels, 280 parts per million. Obviously 560 PPM is twice pre-industrial. If you like, you can even set values outside these ranges by going to the box down below. For example, I could set the CO2 level at 700 PPM.
And we can see the initial temperature-- surface temperature-- is 288 K for the standard default settings. The new surface temperature, 292 K-- that was a 4.1 degree warming of the surface. The atmosphere itself-- the mid-troposphere-- warmed somewhat less, 3.4 K.
We can see what the longwave and the shortwave forcings are. So these are estimates of the radiative forcing due to the combination of the direct influence of changing the CO2 levels, plus the various feedbacks-- the water vapor feedback increasing the greenhouse gas concentration, giving us longwave forcing that adds to the longwave forcing from the increase in CO2 alone.
The ice albedo feedback playing into the shortwave forcing. The cloud radiative feedback can influence a combination of shortwave and longwave forcing. There's the cloud albedo effect, which tends to be a negative feedback. But there's also the greenhouse gas-like properties of clouds, the infrared absorbing properties of clouds, which give us a positive feedback. And as we vary this feedback factor, we can transition from where the negative cloud radiative feedbacks dominate to where the positive cloud feedbacks dominate.
We can see how the albedo changes as we vary the shortwave feedbacks. We can see how the atmospheric emissivity changes. For example, as we change the CO2 level, the default emissivity in this model being 0.77-- the value that gives us a surface temperature of about 288 K, the current best estimate of Earth's average surface temperature.
Finally, if we like we can vary the solar constant. We can vary the initial albedo-- Earth's planetary albedo-- which will of course be modified as we change some of the feedback factors.
So that's how it works. You're going to explore this model further in your problem set.
There are many ways one can generalize upon the zero-dimensional EBM. As we saw in the previous section, we can try to resolve the additional, vertical degree of freedom in the climate system through a very simple idealization—the one layer generalization of the zero-dimensional EBM. If for no other reason than the fact that the incoming solar radiation is symmetric with respect to longitude, but varies quite dramatically with latitude, the latitudinal degree of freedom is the next most important property to resolve if we wish to obtain further insights into the climate system using a still relatively simple and tractable model.
That brings us to the concept of the one-dimensional energy balance model, where we now explicitly divide the Earth up into latitudinal bands, though we treat the Earth as uniform with respect to longitude. By introducing latitude, we can now more realistically represent processes like ice feedbacks which have a strong latitudinal component, since ice tends to be restricted to higher latitude regions.
Recall that we had, for the linearized zero-dimensional gray body EBM, a simple balance [3]:

(1 − α) S / 4 = a + b T

where α is the planetary albedo, T is the surface temperature, and the linear function a + b T approximates the outgoing longwave radiation (a and b are empirical constants).
Generalizing the zero-dimensional EBM, we can write a similar radiation and energy balance equation for each latitude band i:

(1 − α_i) S_i = a + b T_i

where a + b T_i is the linearized outgoing longwave radiation for band i, α_i is the band's albedo, and S_i its incoming solar radiation.
We have now introduced some extremely important generalizations. The temperature T_i, albedo α_i, and incoming solar radiation S_i are now functions of latitude, allowing us to represent the disparity in incoming shortwave radiation between equator and pole, and the strong latitudinal dependence of albedo; in particular, when the temperature for a particular latitude zone falls below freezing, we represent the increased accumulation of snow/ice in terms of a higher albedo. The global average temperature T̄ is computed by an appropriate (area-weighted) averaging of the temperatures T_i for the different latitude bands.
Recall that the disparity in received solar radiation between low and high latitudes leads to lateral heat transport [5] over the surface of the Earth by the atmospheric circulation and ocean currents. In the absence of lateral transport, the poles would become increasingly cold and the equator increasingly warm. Clearly, we must somehow represent this meridional heat transport in the model if we expect realistic results. This can be done through a very crude representation of the process of heat advection: a term C (T̄ − T_i) proportional to the difference between the global average temperature T̄ and the temperature T_i of a given latitude band, where C is some appropriately chosen constant. This term represents processes associated with lateral heat advection that tend to warm regions that are colder than the global average and cool regions that are warmer than the global average.
This gives the final form of our one-dimensional EBM:

(1 − α_i) S_i + C (T̄ − T_i) = a + b T_i

with the linearized outgoing longwave radiation a + b T_i on the right and the heat transport term C (T̄ − T_i) now included on the left.
The model is complex enough now that there is no way to simply write down the solution anymore. But we can solve the model numerically, through a very simple and primitive form of something we will encounter much more of in the future—a numerical climate model.
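As a rough sketch of what such a numerical solution looks like, the following iterates the latitude-band balance to equilibrium. All numerical values (the linearized outgoing-longwave constants, the transport coefficient, the albedos, and the insolation distribution) are standard Budyko-type textbook choices assumed here for illustration, not values taken from this lesson:

```python
import numpy as np

# One-dimensional (latitudinal) EBM of Budyko type, solved by fixed-point
# iteration. All numerical values are illustrative textbook choices.
a, b = 203.3, 2.09        # linearized OLR = a + b*T (W/m^2, T in deg C)
C = 3.8                   # lateral heat transport coefficient (W/m^2 per deg C)
S0 = 1370.0               # solar constant (W/m^2)
alpha_free, alpha_ice = 0.30, 0.62
T_ice = -10.0             # bands colder than this are treated as ice covered

lats = np.deg2rad(np.arange(-85.0, 86.0, 10.0))   # band-center latitudes
w = np.cos(lats) / np.cos(lats).sum()             # area weights

# Annual-mean insolation per band (second Legendre polynomial fit)
x = np.sin(lats)
S_i = (S0 / 4.0) * (1.0 - 0.482 * (3.0 * x**2 - 1.0) / 2.0)

T = np.full(lats.size, 10.0)    # initial temperature guess (deg C)
for _ in range(200):            # iterate the balance to equilibrium
    alpha = np.where(T < T_ice, alpha_ice, alpha_free)
    T_bar = (w * T).sum()       # global average temperature
    # Per-band balance: (1 - alpha_i) S_i + C (T_bar - T_i) = a + b T_i
    T = ((1.0 - alpha) * S_i + C * T_bar - a) / (b + C)

print(round((w * T).sum(), 1))  # global mean temperature (deg C); about 17
```

Lowering S0 by a few percent pushes the coldest bands below the ice threshold, and the ice albedo feedback then drags the global mean down sharply, the kind of behavior exploited in the early Ice Age studies mentioned below.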
One of the most important problems that was first studied using this simple one-dimensional model was the problem of how the Earth goes into and comes out of Ice Ages. Use the links below to open the demonstration, which is in 3 parts.
Finally, we come to the so-called General Circulation Models or GCMs. GCMs attempt to describe the full three-dimensional geometry of the atmosphere and other components of Earth's climate system. Atmospheric GCMs numerically solve the equations of physics (e.g., dynamics, thermodynamics, radiative transfer, etc.) and chemistry applied to the atmosphere and its constituent components, including the greenhouse gases. In more primitive GCMs (the earlier generation models), the role of the ocean was treated in a very basic way, e.g., as a simple slab of water where only the thermodynamic role of the ocean was accounted for.
Current generation climate models typically include an ocean that plays a far more active role in the climate system. The major current systems are modeled, as is their direct role in transporting heat poleward. When the dynamics of the ocean and its interactions with the atmosphere are explicitly resolved by a climate model, the model is referred to as an Atmosphere-Ocean GCM, or AOGCM, or sometimes simply a coupled model. Most state-of-the-art climate modeling centers today run AOGCMs. In addition, many state-of-the-art climate models today include a detailed description of the hydrological cycle (which couples atmospheric, terrestrial, and ocean reservoirs of water and the flows between these reservoirs) as well as the role of the terrestrial biosphere, the continental ice sheets, and even the carbon cycle and its interactions with the ocean and the atmosphere.
Unlike simpler climate models like EBMs, GCMs and AOGCMs can be used to study a variety of climate attributes other than surface temperature, such as atmospheric temperature profiles, rainfall, atmospheric circulation, ocean circulation, wind patterns, snow and ice distributions, and many other variables that are part of the global climate system.
The EdGCM project, funded by the U.S. National Science Foundation and spearheaded by scientists associated with the NASA Goddard Institute for Space Studies (GISS), uses the GCM originally used in a number of famous experiments (which we will review later in this lesson) by climate scientist James Hansen, then Director of GISS. This model was developed in the 1980s and is primitive by modern standards, but it includes much of the important physics that is in current state-of-the-art climate models and it is far less computationally intensive. The scientists at EdGCM have ported the model into a format that can be run on a simple desktop or laptop computer (both PC and Mac). Originally it was free, but to cover expenses for the project, a minor fee is now required for download. Your course author has downloaded EdGCM onto his own laptop (MacBook Pro) and is now going to show you the results of several experiments he has run.
James Hansen [8] is a well known climate scientist who formerly directed NASA's Goddard Institute for Space Studies. [9]
He was the first climate scientist to testify in the U.S. Congress that human-caused climate change had indeed arrived, back during the hot summer of 1988. Today, as far greater evidence has amassed, his early comments appear especially prescient.
During his 1988 congressional testimony, Hansen showed the results of simulations he had performed using the NASA GISS GCM—the very same climate model explored in the EdGCM experiments of the previous section. These simulations included not only historical simulation of past climate changes, but three possible projections of future warming that depended on different possible future fossil fuel utilization scenarios.
Yogi Berra is quoted as having once said, "predictions are hard—especially about the future". Indeed, there is no better test than making a prediction about the future, and looking back and seeing how it panned out. This sort of post hoc validation is often done with numerical weather models. It's more difficult to do with climate models, however, because you have to wait not days or weeks, but years to see how the prediction actually measured up.
Hansen's 1988 simulations, in this regard, can be viewed as one of the great validation experiments in climate modeling history. In these experiments, Hansen included a high, medium, and low fossil fuel future emissions scenario, corresponding to the green, blue, and purple curves respectively. As it turns out, our actual fossil fuel emissions scenario during the two decades subsequent to Hansen's 1988 projections, has corresponded most closely to his middle scenario, the blue curve. And as you can see from the subsequent observations (the red curve), his prediction for that scenario quite closely matched the observed warming.
Now, you may have noticed, however, that this model simulation didn't capture the observed multi-year cooling in 1992. Is that a fault of the simulation?
No!—there is no way that James Hansen (or anyone, for that matter) could have predicted the eruption of Mt. Pinatubo. And rather than proving a fault with the model, the Pinatubo eruption actually provided Hansen with another key test of the climate models. It takes about 6 months for the volcanic aerosol to spread out around the globe and begin to have a global cooling impact. This gave Hansen about six months, from the instant Pinatubo erupted, to run his model and make a prediction. As you can see, he was able to predict quite accurately the short-term cooling of the globe by a bit less than 1°C that would result from this eruption. His model simulation (the black curve below) actually predicted a bit too much cooling (observations shown by the blue curve below). But that, too, wasn't his fault. El Niño events occur randomly in time, and there was no way to know that an extended El Niño event would occur in 1991-1993, offsetting some of the volcanic cooling: as you found in your first problem set, El Niño events warm the globe by about 0.1-0.2°C.
These examples may be the most striking examples of how the models have been validated, but they have been validated in many other, more mundane ways. In fact, the various reports of the IPCC include hundreds of pages of 'model validation' showing the models do a good job capturing the main fluxes of energy and radiative balances, the general circulation of the atmosphere and the major ocean current systems, the amplitude and pattern of the seasonal response to changing patterns of solar insolation, etc.
So, we have seen in the previous section that climate models have been used to make some very successful predictions in the past, and there is reason to take them seriously. Can we use these models to go a step further than we already have? We have seen in previous lessons that modern-day climate change appears anomalous and without any obvious precedent in the historical past. That alone does not establish that the changes that we are seeing—warming of the Earth's surface, and many other changes—are due to human impacts. Using climate models, we can, however, address this issue of causality. We can use the models to investigate the hypothesis that the observed changes can be explained by nature alone, and the alternative hypothesis that they can only be explained by a combination of human and natural factors. Investigations employing more than 20 state-of-the-art climate models (see below) show that natural factors alone cannot explain the global temperature record of the past century—including the long-term warming trend—while human factors, combined with the natural factors, can.
Some might argue that this alone is not convincing evidence. Perhaps, for example, we simply have the trend in solar output wrong and the true trend in solar output closely resembles the trend in human impacts (i.e., greenhouse forcing + anthropogenic aerosols). Then we might be misinterpreting the goodness of the fit shown above.
Let us, for argument's sake, accept that criticism. Is there some other type of comparison of observations and model predictions that might be more robust in this situation? Well, we can try to take advantage of the fact that the patterns of response to different forcings might look different. It turns out that the surface expressions are not that different—the surface expression of warming due to solar output increases actually looks a fair amount like the pattern of surface warming due to greenhouse gas increases. The vertical patterns of temperature change, however, as we alluded to previously in the course [12], are expected to be quite different. They provide a true fingerprint to search for—and indeed, the process of using the expected patterns of response of different forcings to determine which forcings best explain the observed changes is known as fingerprint detection.
The vertical pattern of response to increasing greenhouse gas concentrations is one in which the troposphere warms (as we have seen in previous exercises), but the stratosphere cools at the expense of this tropospheric warming; greenhouse forcing is a zero-sum game in which there is no increase in radiation at the top of the atmosphere, but merely a redistribution of energy and radiation within the atmosphere. The vertical pattern of temperature change we would expect for an increase in solar output, however, is one in which the entire atmosphere warms, from top to bottom, as there is an increase in the received radiation at the top of the atmosphere which warms the entire atmospheric column. The pattern of temperature response to an explosive volcanic eruption is yet different from either of these patterns. From comparing the observed patterns of vertical temperature change to model simulations of the responses to each of these different factors, we find that only greenhouse forcing produces a vertical pattern consistent with the observed changes (in fact, ozone depletion has also contributed to the cooling of the stratosphere, but even after accounting for that effect, the remaining trend can clearly only be accounted for by greenhouse forcing).
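The essence of fingerprint detection can be illustrated with synthetic vertical trend profiles. All of the numbers below are invented for illustration (they are not real model output or observations); the point is that a pattern correlation distinguishes the two candidate fingerprints:

```python
import numpy as np

# Synthetic vertical temperature-trend profiles (illustrative values only).
levels = np.array([1000, 850, 700, 500, 300, 100, 50, 30])      # pressure (hPa)

# Expected fingerprints (deg C/decade): greenhouse forcing warms the
# troposphere and cools the stratosphere; solar forcing warms throughout.
ghg   = np.array([0.2, 0.2, 0.2, 0.2, 0.2, -0.3, -0.5, -0.6])
solar = np.array([0.1, 0.1, 0.1, 0.12, 0.14, 0.16, 0.18, 0.2])

# A synthetic "observed" trend profile (tropospheric warming, stratospheric cooling):
obs = np.array([0.18, 0.22, 0.19, 0.21, 0.17, -0.28, -0.45, -0.55])

def pattern_corr(a, b):
    """Centered pattern correlation between two vertical profiles."""
    a, b = a - a.mean(), b - b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

print(round(pattern_corr(obs, ghg), 2))    # high: close to 1
print(round(pattern_corr(obs, solar), 2))  # negative: wrong sign aloft
```

The observed profile correlates strongly with the greenhouse fingerprint and poorly (with the wrong sign in the stratosphere) with the solar one, which is exactly the kind of discrimination the surface pattern alone cannot provide.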
One of the key unknowns in the behavior of the climate, as we have seen, is the sensitivity—how much warming we can expect in response to a doubling of atmospheric CO2 concentrations. Current evidence suggests a most likely value of around 3.0°C warming, but there is—as we have seen—a wide range, anywhere from roughly 1.5°C to 4.5°C. Scientists attempt to try to constrain estimates of this key quantity by comparing model simulations with observations.
For example, scientists use models similar to the zero-dimensional EBMs we discussed in Lesson 4, driving them with the estimated changes in both natural factors (volcanoes and solar output) and human factors (greenhouse gas increases and sulfate aerosol emissions). Since the climate sensitivity is simply a parameter that can be changed in the model, scientists can do many simulations using different values of the climate sensitivity, and observe which values yield the best fit with the observations.
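The logic of such an experiment can be sketched with a toy model and synthetic "observations". The forcing ramp, noise level, and response model below are all invented for illustration, but the procedure (sweep the sensitivity parameter, keep the best-fitting value) is the one described above:

```python
import numpy as np

rng = np.random.default_rng(0)

years = np.arange(1900, 2001)
# Toy CO2 forcing ramp: 0.2% per year growth, 3.7 W/m^2 per doubling
forcing = 3.7 * np.log2(1.002 ** (years - 1900))

def model_response(sensitivity):
    """Toy zero-dimensional response: warming scales with the forcing."""
    return sensitivity / 3.7 * forcing      # deg C

truth = model_response(3.0)                 # "true" sensitivity of 3 C
obs = truth + rng.normal(0.0, 0.05, truth.size)   # add observational noise

candidates = np.arange(1.5, 4.6, 0.1)       # candidate sensitivities (deg C)
rmse = [np.sqrt(np.mean((model_response(s) - obs) ** 2)) for s in candidates]
best = candidates[int(np.argmin(rmse))]
print(round(best, 1))                        # recovers a value near 3.0
```

Real studies do the same sweep with physically based models and actual forcing reconstructions, and report the range of sensitivities whose simulations remain consistent with the observations.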
Such experiments can be done over the modern period back to the mid 19th century, during which observations of global mean temperature are available.
During the shorter period of the past half century when deep ocean temperature observations are available, experiments can be done to compare the model-simulated changes in ocean heat content with those that have been observed.
For the longer period of the past millennium during which temperature changes, as we have seen in Lesson 3 [13], have been documented based on climate proxy data—it is possible to compare simulated and observed changes over a longer time period, providing potentially tighter constraints on climate sensitivity. The computer model simulations in this case are driven by longer-term estimates (e.g., from ice core evidence) of natural (volcanic and solar) forcings as well as modern anthropogenic forcing:
Going further back in time, scientists compare climate model simulations of the cooling during the height of the Last Glacial Maximum (LGM), roughly 21,000 years ago, resulting from lowered atmospheric CO2, increased continental ice cover, and altered patterns of solar insolation, with proxy evidence of ocean surface cooling derived from climate-sensitive surface-dwelling organisms trapped in ocean sediment cores.
Finally, going even further back in time, into the deep geological past, scientists compare model results with geological evidence of past warm and cold periods.
The overall evidence from all of these different lines of evidence regarding both human-caused and natural climate changes over a broad range of time scales, is that the equilibrium climate sensitivity likely falls within the range of 1.5°C to 4.5°C for CO2 doubling, with a most likely value of roughly 3°C warming.
Given the full array of available evidence from instrumental and paleoclimate proxy data, and the comparisons of this evidence with theoretical estimates, there is a very low likelihood of either a trivially small (e.g., 1.5°C or less) or extremely high (greater than 7°C) equilibrium climate sensitivity.
For this assignment, you will need to record your work on a word processing document. Your work must be submitted in Word (.doc or .docx) or PDF (.pdf) format so the instructor can open it.
The documents associated with this problem set, including a formatted answer sheet, can be found on CANVAS.
Each problem (#2 to #6) will be graded on a quality scale from 1 to 10 using the general rubric as a guideline. For this problem set, each problem is equally weighted. Thus, a score as high as 50 is possible, and that score will be recorded in the grade book.
The objective of this problem set is for you to work with some of the concepts and mathematics around the one-layer and one-dimensional energy balance models (EBMs) covered in Lesson 5. You may find Excel useful in this problem set, but you may use any software you wish, keeping in mind that the instructor can only provide help with Excel.
In this lesson, we further explored the use of theoretical models of the climate system. We found that:
You have finished Lesson 5. Double-check the list of requirements on the first page of this lesson [15] to make sure you have completed all of the activities listed there before beginning the next lesson.
Links
[1] https://www.ipcc.ch/report/ar6/wg1/downloads/report/IPCC_AR6_WGI_SPM_final.pdf
[2] https://www.e-education.psu.edu/meteo469/node/115
[3] https://www.e-education.psu.edu/meteo469/node/137
[4] https://www.e-education.psu.edu/meteo469/node/116
[5] https://www.e-education.psu.edu/meteo469/node/203
[6] https://www.youtube.com/channel/UCU1QB1a5XJa_nTHD2lzr7Ew?view_as=subscriber
[7] https://es.wikipedia.org/wiki/James_Hansen
[8] http://en.wikipedia.org/wiki/James_Hansen
[9] http://data.giss.nasa.gov/gistemp/
[10] https://www.e-education.psu.edu/meteo469/sites/www.e-education.psu.edu.meteo469/files/lesson05/pinatubo.gif
[11] http://www.meteor.iastate.edu/gccourse/
[12] https://www.e-education.psu.edu/meteo469/node/121
[13] https://www.e-education.psu.edu/meteo469/node/134
[14] https://www.e-education.psu.edu/meteo469/node/198
[15] https://www.e-education.psu.edu/meteo469/node/139