The links below provide an outline of the material for this lesson. Be sure to carefully read through the entire lesson before returning to Canvas to submit your assignments.
This lesson will discuss the beginnings of the oil industry in Pennsylvania in the 1850s, the driving force for the early oil production, the key players, and the challenges, booms, and busts. The emergence, role, and importance of Rockefeller and Standard Oil in the early development of the oil industry and corporate structure in America will also be discussed. This lesson will also introduce the emergence of competitive oil industries in Russia and the Dutch East Indies in the 1860s.
By the end of this lesson, you should be able to:
This lesson will take us one week to complete. Please refer to the Course Calendar for specific due dates. See the specific directions for the assignments below.
If you have any questions, please post them to our Questions? discussion forum (not email). I will check that discussion forum daily and respond as needed. While you are in the discussion forum, feel free to post your own responses if you, too, are able to help out a classmate.
Crude oil is one of the most economically mature commodity markets in the world. Even though most crude oil is produced by a relatively small number of companies, and often in remote locations that are very far from the point of consumption, trade in crude oil is robust and global in nature. Nearly 80% of international crude oil transactions involve delivery via waterway in supertankers. Oil traders are able to quickly redirect transactions towards markets where prices are higher.
Oil and coal are global commodities that are shipped all over the world. Thus, global supply and demand determine prices for these energy sources. Events around the world can affect our prices at home for oil-based energy such as gasoline and heating oil. Oil prices are high right now because of rapidly growing demand in the developing world (primarily Asia). As demand in these places grows, more oil cargoes head toward those countries, and prices elsewhere must rise as a result. Political unrest in some oil-producing nations also contributes to high prices - basically, there is a fear that political instability could shut down oil production in those countries. OPEC, the large oil-producing cartel, does have some ability to influence world prices, but its influence in the world oil market is shrinking rapidly as new supplies in non-OPEC countries are discovered and developed.
As shown in Figure 1.1, the development of our current globalized oil market can be broken down into a few different stages. The first phase was marked largely by intra-company transactions with occasional inter-company "spot" sales. The second was defined by the emergence of OPEC and its attempts to influence an increasingly global oil trade for political ends. The third is defined by the commoditization of oil markets, with regional prices linked by inter-regional trade and the development of sophisticated financial instruments such as futures and option contracts, which we'll discuss in more detail below.
Our discussion of the world market for crude oil will be broken into a few sections. We will first focus on what's called the "spot" and "forward" market for crude oil. The term "spot market" generally refers to a short-term commodity transaction where the physical commodity changes hands very soon after the seller receives payment. Most retail consumer purchases are examples of spot transactions. When I buy a newspaper at the convenience store or a slice of pizza at the pizza shop, I get the product right after I pay for it. Forward markets refer to contracts where buyers and sellers agree up-front on a price for a commodity that will be delivered at some point in the future. When I subscribe to the newspaper and have it delivered to my house every day, I am signing a type of forward contract with the newspaper company. When I call the pizza shop and ask them to deliver a pizza to my house one hour or one day from now, then I am also engaging in a type of forward contract.
We will also discuss "futures" markets for crude oil. The difference between futures and forward markets can be confusing at times. The primary difference is that a futures contract is a highly standardized commodity sold through a financial exchange, rather than a highly customizable contract bought and sold through one-on-one transactions. Futures markets do have the advantage that they have been able to attract many more buyers and sellers than forward markets.
Finally, we will spend time in this lesson learning where to find data on global oil markets and discussing the issue of "peak oil" - whether world oil reserves are dwindling, or whether there is actually enough to support rising oil consumption over the coming decades. There is more data on the web than you might think, but the difficulty is in knowing where to find exactly what you are looking for.
Through much of the first half of the 20th century, the United States was the dominant world oil producer and was, in fact, a net exporter. By the early 1900s, a bona fide "world" market for oil had developed, but the world price was simply the U.S. price plus the cost of freight. Russia's production did not lag far behind U.S. production during this time, but its relatively closed economy precluded much trade with the rest of the world. Table 1.1 shows the leading crude-oil producers up to 1970.
Table 1.1: Leading crude-oil producers (thousand barrels per day)

Year | U.S. | Mexico | Venezuela | Russia | Indonesia | Middle East |
---|---|---|---|---|---|---|
1910 | 575 | 10 | 0 | 170 | 30 | 0 |
1920 | 1,214 | 430 | 1 | 70 | 48 | 31 |
1930 | 2,460 | 408 | 374 | 344 | 114 | 126 |
1940 | 3,707 | 12 | 508 | 599 | 170 | 280 |
1950 | 5,407 | 198 | 1,498 | 729 | 133 | 1,755 |
1960 | 7,055 | 271 | 2,854 | 2,957 | 419 | 5,255 |
1970 | 9,637 | 487 | 3,708 | 6,985 | 854 | 13,957 |
Unlike in the OPEC nations, the market for crude oil in the U.S. was generally competitive, since mineral rights belonged not to the state but to the landholder. If oil was under your property, you owned it. Since oil (and gas) reservoirs often crossed property boundaries, "rights" were hard to specify and enforce. If an oil reservoir was split between my property and yours, the money would flow to whoever could pump all the oil out first. Overdrilling was the result, as was the birth of "wildcatters," individuals who would drill oil wells on unproven land, hoping to strike it rich.
During the period up to 1970 (and even beyond), the "market" for crude oil was characterized largely by within-company exchanges. Most oil companies were "vertically integrated," meaning that the company operated all the way down the value chain - crude oil would go from the field to the refiner to the marketer (and then to the retailer, like a gas station) while staying within company borders. There were a small number of market transactions at what were referred to as "posted prices." Posted prices are essentially fixed offer prices posted by companies in advance of transactions. Posted prices were originally painted on wooden signs and hung on posts (hence the name), each remaining in effect until it was replaced by a new one. Now, posted prices take the form of electronic bulletins issued by major oil producers. (The spot market for crude oil is still not very large, so posted prices are not always representative of market conditions.) An example of a posted price bulletin is shown in Figure 1.2.
Memorandum - August 23, 2013
Chevron has changed its posted prices as shown below:
Effective 7:00 a.m. on the date(s) indicated, and subject to change without notice and subject to the terms and conditions of its division orders or other contracts, Chevron will pay the following prices ($/bbl) for oil at 40.0° API and above, except as noted below, delivered into pipelines for its account into the custody of its authorized carrier or receiving agent. The posted prices for crude oil transported by truck or barge will be the following prices less appropriate trucking or barge cost. The posted prices for crude oil received off shore will be the following prices less appropriate transportation and quality costs to get the crude on shore. For information on Chevron's most recent prices, call Cliff Williams +1 832 544 5822.
For fax receipt problems call +1 832-854-2929.
For a voice recorded message of the posted prices each day, please call 832 854 3033.
Note: Effective January 1, 2008 Chevron will begin posting for Rocky Mountain Condensate.
1 Prices are subject to a Gravity Adjustment Scale.
East of the Rockies | Price | Change | Gvty Adj1 |
---|---|---|---|
Altamont Yellow Wax | 90.92 | 1.39 | --- |
Rocky Mountain Condensate | 95.42 | 1.39 | --- |
Southwest Wyoming Sweet | 98.92 | 1.39 | --- |
Uinta Basin Black Wax | 98.92 | 1.39 | --- |
Western Colorado | 96.42 | 1.39 | --- |
Note: Prices in $/bbl
The prices are based upon computation of quantities by mutually acceptable automatic measuring equipment in accordance with provisions of API Standard 1101 or from 100% tank tables with corrections of volume and gravity for temperature to 60°F in accordance with ASTM D-1250 and supplements thereto, if any, and with deductions for full sediment, water and other impurities. Only merchantable oil acceptable to the receiving agency designated by Chevron and containing not more than 1% sediment and water will be accepted.
OPEC was formed in 1960, largely as a way for governments of oil-producing nations to capture oil revenues that, at the time, were going to foreign producing firms. The motivation for founding OPEC was not market power, but rather a tax dispute. Oil was originally taxed as income. Thus, as prices fell with increased competition, the tax collected by the oil-producing nations also fell. The members of OPEC decreed that oil taxes would now come in the form of excise taxes, levied on a per-barrel basis. In other words, they succeeded in separating tax revenue from the value of the taxable commodity. The rise in oil prices around this time (see Figure 1.1) reflects the fact that spot prices reflected taxes collected by OPEC governments, rather than market transactions, since excise taxes act as a floor on prices. As a cartel attempting to coordinate actions among its members, OPEC has had only mixed success as we will discuss below. Two incidents, one in 1973 and one in 1979, however, did impact the world oil market substantially, as shown in the middle section of Figure 1.1, and cemented OPEC's reputation into place for the following decades.
In 1971 and 1972, fears began to grow in the developed world that if we were not already running out of energy supplies, we soon would be, as additional nations adopted western industrial structures. Then-president Richard Nixon appeared particularly concerned that Arab nations might impose a selective embargo on the United States for its pro-Israel policy. Such a selective embargo could not have worked; the world crude-oil market was too large, and replacement oil could have been found in too many places. The energy crisis was largely hysteria - production was increasing with no end in sight, and imports, particularly from Saudi Arabia, were rising.
It is important to separate the energy crisis from the Arab oil embargo of 1973. The two are separate but related events. The Arab oil embargo was successful only because of the price controls and rationing that occurred as a result of the energy crisis. It is possible (but perhaps a stretch) that the high prices and lines at the gas pumps may have happened even without cutbacks in supply from the Arab oil producers. The oil embargo officially started in October 1973, when a group of Middle East countries announced a 5% production cut per month as a reaction to the Yom Kippur war between Egypt and Israel (or, more precisely, Israel's victory in that war, aided by nations such as the United States). The embargoing nations said that the cuts would be restored once Israel withdrew from Palestine and Jerusalem.
The oil embargo of 1979 was not really much of an embargo at all, at least not in the sense of the 1973 embargo. The output cuts in 1979, however, were much larger and the overall effect more lasting than that of its predecessor six years earlier. The primary player in the 1979 embargo was Saudi Arabia, which cut production following a strike by Iranian oil workers. The production cuts were an attempt to raise prices. This they did, but they also reawakened fears of an energy crisis, with politicians muttering I-told-you-sos about how the world was in for a severe energy shortage. The cutbacks by Saudi Arabia only lasted three months, but the damage was done, and Saudi Arabia was recognized as the only single player that had the capability to move the world oil market. The following year, the Saudis found themselves in the enviable position of being able to raise prices without lowering output.
The current market for crude oil is truly global in reach. Oil cargoes move with relative ease between countries and across oceans. While most U.S. oil imports come from a relatively small group of countries, it is misleading to think that only those countries have an impact on oil prices in the United States. Because oil can and does move so freely from one area to another across the globe, it is better to think of the oil market as a global pool, rather than as a network of suppliers and buyers. If one supplier shrinks the overall depth of the pool by withholding supply (or floods the pool by producing a lot of oil), then the impact will be felt uniformly throughout the pool.
At this point you should listen to episodes 1, 2 and 3 of the Planet Money Buys Oil [1] podcast. This podcast is very entertaining and will give you a sense of what the "physical" market for crude oil is like. The physical market is what we've been talking about so far in the lesson - the part of the oil market where buyers and sellers exchange money for crude oil. In the next part of the lesson we'll move into the "futures market" for crude oil, which is where all sorts of different market players hedge and speculate on the physical price of crude oil. Episode 3 gets a little beyond crude oil into the refining area, which is the topic of Lesson 2. But it's still worth listening to at this point.
Pricing of oil is determined largely by a mix of supply factors, demand factors, and panic. How much of any given oil-price movement is due to each of these three factors is an eternal mystery that keeps a small army of editorial columnists and television talking heads in business. The supply-demand balance is perhaps the easiest piece to explain - when demand is high (for example, during the winter when heating-oil demand rises, or during the summer when people tend to drive more often and over longer distances), consumers are willing to pay more for refined petroleum products, and higher-cost oil supplies must be brought online. Thus, the price goes up. Similarly, when accidents, political strife, or war keep supplies offline, higher-cost replacements must be found, and the price goes up. Panic in the oil market is not always rational, but it does happen. Its roots can be traced back to OPEC's successes in the 1970s at increasing world oil prices, even if only for brief periods. Believing that OPEC had the power to do pretty much whatever it wanted, market participants began engaging in a series of self-fulfilling-prophecy games. These worked something like this: one or more market participants would come to believe that OPEC was about to raise prices or cut supply. Afraid of getting caught short or being unable to fulfill contracts, they would begin stockpiling, pushing up spot prices. Thus, all OPEC needed to do was cause panic in the markets by spreading rumors of policy changes. The gains were nearly always short-lived, as the high cost of holding inventories would trigger sell-offs, bringing oil prices back down to pre-panic levels. Nowadays, broader geopolitical concerns, particularly in the Middle East and Africa, have replaced the grumblings of OPEC as the source of panic-induced spikes in oil prices.
OPEC was mentioned earlier as an entity that has been able to exert substantial influence on global markets for crude oil. OPEC operates as a cartel - a group of producing countries that attempt to coordinate supply decisions in order to exert some influence on prices. OPEC does not try to set prices directly, as is often believed. What OPEC countries try to do is expand or contract oil production in order to keep the world price within some band that the countries collectively deem desirable.
OPEC's actual ability to manipulate oil prices is not all that clear, and its influence has dwindled as more "unconventional" petroleum resources have been developed, including the oil sands in Canada and shale oil in the United States. Most cartels are difficult to sustain, since each member of the cartel has an incentive to cheat - in OPEC's case, this means that countries have often produced more oil than they were supposed to under the quota system, as shown in Figure 1.3 (the most consistent cheater seems to have been the country of Algeria). Even during the 1973 embargo, none of the OPEC nations approached the formal 5% production cut mentioned in the embargo. Saudi Arabia's production decreased by 0.8%. Iraq and Oman saw the biggest percentage cuts in production at 1%. Prices did indeed go up, but largely as a result of fear and higher taxes rather than actual supply shortages. The actual production cuts lasted only three weeks; the embargo fell apart in December when Saudi Arabia raised production.
Figure 1.3 data: production in excess of OPEC quota, by country.

Date | Algeria | Indonesia | Iran | Libya | Nigeria | Qatar | Saudi Arabia | UAE | Venezuela |
---|---|---|---|---|---|---|---|---|---|
1982 Jan | 53 | -3 | 110 | 67 | -8 | 0 | -21 | 18 | 27 |
1982 Mar | 49 | 0 | 81 | 50 | -5 | 6 | -15 | 15 | 22 |
1982 Jun | 46 | 3 | 52 | 34 | -2 | 11 | -10 | 12 | 17 |
1982 Sep | 42 | 6 | 24 | 18 | 1 | 16 | -4 | 9 | 12 |
1983 Jan | 39 | 10 | -5 | 2 | 4 | 21 | 1 | 6 | 7 |
1983 Mar | 41 | 10 | -5 | 2 | 6 | 19 | 0 | 10 | 8 |
1983 Jun | 43 | 11 | -5 | 3 | 8 | 17 | -2 | 15 | 8 |
1983 Sep | 45 | 12 | -5 | 4 | 10 | 15 | -4 | 19 | 9 |
1984 Jan | 46 | 13 | -5 | 4 | 12 | 14 | -5 | 23 | 10 |
1984 Mar | 48 | 13 | -5 | 5 | 14 | 12 | -7 | 28 | 10 |
1984 Jun | 50 | 14 | -5 | 5 | 15 | 10 | -8 | 32 | 11 |
1984 Sep | 50 | 14 | -5 | 5 | 15 | 10 | -8 | 32 | 11 |
1985 Jan | 50 | 14 | -6 | 5 | 15 | 9 | -8 | 32 | 11 |
1985 Mar | 49 | 14 | -6 | 5 | 15 | 8 | -8 | 32 | 11 |
1985 Jun | 49 | 14 | -6 | 5 | 15 | 7 | -7 | 32 | 11 |
1985 Sep | 49 | 14 | -6 | 5 | 15 | 7 | -7 | 32 | 10 |
1986 Jan | 49 | 14 | -7 | 5 | 15 | 6 | -7 | 32 | 10 |
1986 Mar | 48 | 14 | -7 | 5 | 15 | 5 | -7 | 32 | 10 |
1986 Jun | 48 | 13 | -7 | 5 | 15 | 4 | -6 | 32 | 10 |
1986 Sep | 59 | 16 | 6 | -4 | 4 | 2 | -1 | 35 | 16 |
1987 Jan | 61 | 16 | 0 | -1 | 6 | -25 | 2 | 62 | 15 |
1987 Mar | 63 | 16 | -6 | 3 | 8 | -53 | 4 | 88 | 13 |
1987 Jun | 59 | 14 | -5 | 10 | 10 | -18 | 9 | 75 | 16 |
1987 Sep | 56 | 13 | -5 | 18 | 12 | 17 | 13 | 62 | 19 |
1988 Jan | 56 | 13 | -2 | 16 | 13 | 16 | 13 | 63 | 18 |
1988 Mar | 56 | 13 | 0 | 13 | 15 | 15 | 13 | 65 | 18 |
1988 Jun | 56 | 13 | 3 | 11 | 17 | 14 | 13 | 67 | 16 |
1988 Sep | 56 | 13 | 6 | 9 | 19 | 13 | 13 | 69 | 16 |
1989 Jan | 53 | 10 | -47 | 7 | 23 | 16 | 8 | 74 | 13 |
1989 Mar | 50 | 7 | -99 | 6 | 27 | 19 | 3 | 78 | 10 |
1989 Jun | 43 | 4 | -3 | 3 | 20 | 12 | 8 | 95 | 8 |
1989 Sep | 40 | 1 | -5 | 4 | 8 | 7 | 5 | 90 | 4 |
1990 Jan | 43 | 8 | -1 | 13 | 13 | 8 | 22 | 70 | 11 |
1990 Mar | 46 | 14 | 3 | 21 | 18 | 9 | 38 | 50 | 17 |
1990 Jun | 47 | 13 | 3 | 15 | 13 | 7 | 25 | 34 | 13 |
1990 Sep | 48 | 12 | 3 | 9 | 8 | 5 | 12 | 18 | 9 |
1991 Jan | 49 | 12 | 3 | 3 | 3 | 3 | -2 | 2 | 5 |
1991 Mar | 53 | 12 | 4 | 2 | 6 | 1 | 0 | 3 | 6 |
1991 Jun | 56 | 13 | 5 | 1 | 9 | 0 | 1 | 4 | 6 |
1991 Sep | 60 | 14 | 6 | 6 | 12 | -1 | 3 | 4 | 6 |
1992 Jan | 60 | 14 | 6 | 5 | 12 | 4 | 3 | 4 | 6 |
1992 Mar | 59 | 14 | 6 | 4 | 13 | 9 | 2 | 3 | 5 |
1992 Jun | 59 | 14 | 6 | 4 | 13 | 13 | 2 | 2 | 4 |
1992 Sep | 58 | 13 | 6 | 3 | 14 | 18 | 1 | 1 | 4 |
1993 Jan | 63 | 16 | 9 | 0 | 12 | 14 | 3 | 2 | 9 |
1993 Mar | 61 | 15 | 6 | 0 | 9 | 15 | 3 | 2 | 11 |
1993 Jun | 60 | 13 | 3 | -1 | 6 | 16 | 3 | 3 | 13 |
1993 Sep | 60 | 14 | 3 | 0 | 7 | 20 | 3 | 3 | 15 |
1994 Jan | 61 | 14 | 3 | 0 | 8 | 23 | 3 | 3 | 17 |
1994 Mar | 62 | 14 | 3 | 0 | 9 | 27 | 3 | 4 | 19 |
1994 Jun | 63 | 14 | 3 | 1 | 11 | 31 | 3 | 4 | 21 |
1994 Sep | 64 | 14 | 3 | 1 | 12 | 35 | 4 | 5 | 23 |
1995 Jan | 65 | 15 | 3 | 1 | 13 | 38 | 4 | 5 | 25 |
1995 Mar | 66 | 15 | 3 | 1 | 14 | 42 | 4 | 5 | 27 |
1995 Jun | 67 | 15 | 3 | 2 | 16 | 46 | 4 | 6 | 29 |
1995 Sep | 67 | 15 | 3 | 2 | 17 | 50 | 5 | 6 | 31 |
1996 Jan | 68 | 15 | 3 | 2 | 18 | 53 | 5 | 7 | 33 |
1996 Mar | 69 | 15 | 3 | 3 | 19 | 57 | 5 | 7 | 35 |
1996 Jun | 65 | 14 | 1 | 2 | 18 | 60 | 5 | 6 | 35 |
1996 Sep | 60 | 12 | -1 | 1 | 17 | 62 | 4 | 5 | 35 |
1997 Jan | 56 | 10 | -3 | 0 | 16 | 65 | 4 | 4 | 34 |
1997 Mar | 51 | 9 | -4 | -1 | 15 | 68 | 4 | 3 | 34 |
1997 Jun | 47 | 7 | -6 | -2 | 13 | 70 | 4 | 2 | 34 |
1997 Sep | 42 | 5 | -8 | -3 | 12 | 73 | 3 | 1 | 33 |
1998 Jan | 55 | 19 | 10 | 4 | 1 | 2 | 2 | 7 | 6 |
1998 Mar | 46 | 10 | 1 | -4 | 16 | 86 | 1 | 8 | 38 |
1998 Jun | 52 | 14 | 3 | 0 | 15 | 62 | 2 | 8 | 26 |
1998 Sep | 57 | 18 | 6 | 4 | 13 | 37 | 3 | 8 | 14 |
1999 Jan | 63 | 22 | 8 | 8 | 12 | 13 | 4 | 8 | 3 |
1999 Mar | 61 | 19 | 8 | 7 | 36 | 13 | 4 | 9 | 2 |
1999 Jun | 60 | 17 | 9 | 7 | 59 | 13 | 3 | 9 | 1 |
1999 Sep | 59 | 14 | 9 | 7 | 82 | 12 | 2 | 10 | 2 |
2000 Jan | 57 | 12 | 9 | 6 | 106 | 12 | 1 | 10 | 1 |
2000 Mar | 56 | 10 | -63 | 5 | 96 | 13 | 3 | 6 | 3 |
2000 Jun | 50 | 3 | -3 | 1 | 3 | 10 | 3 | 4 | -3 |
2000 Sep | 55 | 9 | 3 | 4 | 10 | 15 | 4 | 9 | 6 |
2001 Jan | 63 | 9 | 0 | 6 | 10 | 16 | 4 | 10 | 4 |
2001 Mar | 65 | 10 | 3 | 6 | 16 | 15 | 4 | 8 | 4 |
2001 Jun | 67 | 12 | 5 | 6 | 23 | 15 | 4 | 6 | 4 |
2001 Sep | 86 | 13 | 8 | 13 | 21 | 21 | 8 | 10 | 5 |
2002 Jan | 76 | 7 | -10 | 13 | 19 | 21 | 10 | 8 | -15 |
2002 Mar | 67 | 2 | -28 | 13 | 17 | 21 | 12 | 7 | -36 |
2002 Jun | 57 | -4 | -46 | 12 | 15 | 22 | 13 | 6 | -56 |
2002 Sep | 47 | -9 | -62 | 12 | 13 | 22 | 15 | 4 | -76 |
2003 Jan | 38 | -16 | 1 | 7 | -3 | 17 | 15 | 4 | -33 |
While OPEC has historically been viewed as a cartel that keeps oil prices high, its members have, more recently, probably been at least partially responsible for the rapid decline in oil prices. The Economist has a nice recent article [3] describing the factors that have contributed to the slide in oil prices. The slide has been partly due to sluggish economies in developing countries, improved energy efficiency in rich countries, the boom in shale-oil production in the United States (which we will come back to in a few weeks), and a strategic decision by Saudi Arabia to maintain high oil production levels even in the face of low prices (perhaps an attempt to inflict economic pain on the U.S. shale-oil business).
While the market for oil is global in reach, trade has clustered itself into several primary regions. This has happened despite shipping costs that are generally low (only a few dollars per barrel) and the ease with which oil cargoes can be directed and redirected towards the highest-priced buyers (in financial terms, oil is "fungible"). Nevertheless, prices in these regions tend to move in tandem. In the homework assignment for this week, you will get some hands-on experience using prices for some of these regional crude oils to investigate how world oil prices move relative to one another.
One reason for regional pricing of crude oil is that it is a heterogeneous commodity - not all crude oils are alike. Some oil can be extracted at a cost of a few dollars per barrel, and flows like water (it would look like Coca-Cola coming out of the ground). Other oil requires sophisticated equipment, techniques and processing to extract, and is thick as tar, requiring special methods to transport it to the refinery (and to refine into saleable petroleum products). In general, oil with a low viscosity is referred to as "light," while thicker, higher-viscosity crude oils are referred to as "heavy." Light oils are generally valued higher than heavy oils. The viscosity of crude oil is measured on a scale known as the API gravity (API stands for "American Petroleum Institute"). The API gravity scale measures how heavy or light a crude oil is, relative to water (thus the terms heavy and light oil). The API gravity of a crude oil is measured by taking its specific gravity (density relative to water), and calculating:
API Gravity = (141.5 ÷ Specific Gravity) - 131.5.
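The formula above is easy to apply directly. Here is a minimal sketch in Python; water (specific gravity of 1.0) comes out at exactly 10, which is why oils lighter than water have API gravities above 10. The specific gravity of 0.827 used below is just an illustrative value for a light crude, not a figure from this lesson.

```python
# API gravity from specific gravity (density relative to water at 60°F).
# By construction, water (SG = 1.0) has an API gravity of exactly 10.
def api_gravity(specific_gravity: float) -> float:
    return 141.5 / specific_gravity - 131.5

print(api_gravity(1.0))    # water -> 10.0
print(api_gravity(0.827))  # an illustrative light crude -> about 39.6
```

Note that the scale is inverted relative to density: the lighter (less dense) the crude, the higher its API gravity, which is why "light" crudes carry the higher numbers.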
Sulfur content is another important determinant of value; the lower the sulfur content, the better. So-called "sweet" oils are low in sulfur, while "sour" oils have a higher sulfur content. There are some differences in crude oil quality among the major trading regions. Pricing of heterogeneous commodities often involves establishing a benchmark or "marker" price that is used to track general price movements. Pricing in any particular transaction is based on the marker price, with adjustments for location and quality. Below are brief descriptions of some of the major global benchmark oil streams, and the Energy Information Administration has a very nice article [4] on the most important global marker prices. The first two, West Texas Intermediate (WTI) and Brent, are the most important benchmark oil prices in the world.
Earlier, I mentioned that when demand increases, higher-cost supplies must be brought online to meet that higher demand. Prices for oil have certainly been on a roller-coaster ride over the past few years, going from more than $100 per barrel a few years ago to less than $30 per barrel at the time that you are reading this lesson. Does this mean that a few years ago, we thought all of the cheap oil in the world was gone, but we have now discovered new supplies of cheap oil? And if not, then what explains the price movements that we have seen in the oil market in recent years?
The answer depends on some understanding of the cost of supplying crude oil. Figure 1.4 provides a rough idea of the cost of extracting different types of oil resources. The low-cost resources are conventional oil fields that have been operating for decades. The higher-cost resources are so-called "unconventional" sources of oil, including deepwater or Arctic drilling; the oil sands of Alberta, Canada; and extraction of oil from shale formations (one of the best-known examples is the Bakken shale in North Dakota, whose extraction costs are somewhere in the lower end of the range shown - perhaps around $50 to $60 per barrel). If the producers of conventional oil were to flood the market, then the price would drop so low that unconventional players would be forced to shut down. This would be good for consumers right now, but bad for the producers of conventional oil (and eventually for consumers), since there would be less oil to sell later on. Thus, conventional oil producers hold some output back, leaving the unconventional producers to serve the leftover or "marginal" demand. This is good for conventional oil producers in both the short and long term (because they earn larger profits), but is bad for consumers in the short term. (In the long term, this strategy keeps prices from rising to even higher levels in the future.)
Energy Source | Potential Production (billion barrels) | Production Cost ($ per barrel) |
---|---|---|
Conventional Oil | 2000 | 10-15 |
EOR | 2000-3000 | 15-20 |
Tar Sands/Heavy Oil | 3000-5000 | 20-22 |
Gas to liquid synfuel | 5000-8000 | 20-30 |
Coal to liquid synfuel | 8000-16000 | 32-34 |
Oil Shales | 16000-18000 | 30-90 |
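The table above can be read as a rough "merit order": cheap conventional barrels get used first, and as cumulative demand grows, the marginal barrel must come from progressively costlier sources. The sketch below illustrates that logic, treating the table's rows as a simple step-function supply curve (using the upper end of each cost range). This is a pedagogical simplification, not a model of actual market dispatch.

```python
# Merit-order sketch built from the table above: (resource, cumulative
# potential production in billion barrels, upper-end cost in $/bbl).
SUPPLY_TIERS = [
    ("Conventional Oil",       2000,  15),
    ("EOR",                    3000,  20),
    ("Tar Sands/Heavy Oil",    5000,  22),
    ("Gas to liquid synfuel",  8000,  30),
    ("Coal to liquid synfuel", 16000, 34),
    ("Oil Shales",             18000, 90),
]

def marginal_source(cumulative_demand: float):
    """Return the (resource, cost) tier that serves the marginal barrel."""
    for name, cumulative_potential, cost in SUPPLY_TIERS:
        if cumulative_demand <= cumulative_potential:
            return name, cost
    raise ValueError("demand exceeds total potential production")

# Modest cumulative demand is served by cheap conventional fields...
print(marginal_source(1500))   # -> ('Conventional Oil', 15)
# ...but larger demand pushes the marginal barrel into costlier tiers.
print(marginal_source(6000))   # -> ('Gas to liquid synfuel', 30)
```

This is the sense in which withholding conventional supply raises prices: it pushes the marginal barrel further up the cost curve, and the marginal barrel's cost anchors the market price.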
Part of the reason that crude-oil prices have been so high for so long is the increased role that unconventional oil is playing in world oil supply. This is due in some part to the natural decline in output that is expected from conventional oil fields as they mature (more on this later when we talk about "peak oil"). The growth in unconventional oil supplies has been so rapid that countries with large reserves of unconventional oil, such as the United States, have become large oil producers in a very short period of time, as shown in Table 1.2. It may be worth noting that the world's largest oil reserves are estimated to be held by Venezuela, which is the 11th largest oil producer in the world. There are lots of political reasons that Venezuela is not a larger oil producer, but one economic reason is that lots of Venezuela's oil assets are unconventional and heavy. This means that they take time to develop, and the oil they produce has less value on the world market (i.e., it is costlier to ship and refine).
Table 1.2: Top ten oil producers, 2005 and 2013

2005 Top Ten Oil Producers | 1,000 BBL/Day | 2013 Top Ten Oil Producers | 1,000 BBL/Day |
---|---|---|---|
1. Saudi Arabia | 11,096 | 1. Saudi Arabia | 11,726 |
2. Russia | 9,511 | 2. Russia | 10,788 |
3. United States | 8,329 | 3. United States | 10,003 |
4. Iran | 4,239 | 4. China | 4,180 |
5. China | 3,809 | 5. Canada | 3,948 |
6. Mexico | 3,784 | 6. Iran | 3,558 |
7. Canada | 3,096 | 7. United Arab Emirates | 3,646 |
8. Norway | 2,978 | 8. Iraq | 3,141 |
9. Venezuela | 2,867 | 9. Kuwait | 3,126 |
10. United Arab Emirates | 2,845 | 10. Mexico | 2,875 |
Not much has been made here of long-term contracts for oil. This is because up until the early 1980s there were not very many. Production continued as long as the extracted oil could find a home in the spot market. The oil-producing countries, recognizing their market power, either implicitly or explicitly avoided long-term contracts in pursuit of volatile and largely lucrative spot prices.
Following the oil crisis of 1979, the market seemed to have had enough of OPEC's games. Non-OPEC crude production increased as new fields were opened up in Alaska and the U.K., where there was a healthy forward market for Brent crude oil. The New York Mercantile Exchange (NYMEX) officially opened its doors to oil trading in 1983, although the exchange had started selling product futures some years before. Forward markets arose because while the spot market was good for the OPEC governments, it was lousy for oil companies, refiners and other government buyers. Although the number of these buyers was still relatively small, they needed a way to hedge the spot market, and traders were the means to this end.
We have already explained the difference between spot markets and forward markets. Both are "physical" markets, in that their main purpose is to exchange commodities between willing buyers and sellers. A spot or forward transaction typically involves the exchange of money for a physical commodity. Within the past few decades, entities with sufficiently large exposure in the physical market (i.e., the need to buy or sell lots of physical barrels of crude oil) have developed financial instruments that can help them "hedge" or control price volatility. By far the most important of these financial instruments is the "futures" contract.
Futures contracts differ from forward contracts in three important ways. First, futures contracts are highly standardized and non-customizable. The NYMEX futures contract is very tightly defined, in terms of the quantity and quality of oil that makes up a single contract, the delivery location and the prescribed date of delivery. Forward contracts are crafted between a willing buyer and seller and can include whatever terms are mutually agreeable. Second, futures contracts are traded through financial exchanges instead of in one-on-one or "bilateral" transactions. A futures contract for crude oil can be purchased on the NYMEX exchange and nowhere else. Third, futures contracts are typically "financial" in that the contract is settled in cash instead of through delivery of the commodity.
A simple example will illustrate the difference. Suppose I sign a forward contract for crude oil with my neighbor, where she agrees to deliver 100 barrels of crude oil to me in one month at $50 per barrel. A month from now arrives, my neighbor parks a big truck in my front yard, unloads the barrels, and collects $5,000 from me. This is a "physical" transaction. If instead we had signed a cash-settled futures contract, then in one month no oil would change hands. Instead, we would settle the difference between the contract price and the spot price (this is called "settlement" rather than "delivery"): if the spot price had risen above $50 per barrel, she would pay me the difference on 100 barrels; if it had fallen below $50, I would pay her the difference. I could then go and buy 100 barrels of crude oil on the spot market, and my all-in cost would still work out to $5,000. If the spot price ended up above $50 per barrel, the contract saved me money relative to going unhedged; if it ended up below $50, I would have been better off without the contract - but either way, I knew my cost in advance.
This simple example illustrates the primary usefulness of futures contracts, which is hedging against future fluctuations in the spot price. A hedge can be thought of as an insurance policy that partially protects against large swings in the crude oil price.
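The arithmetic of the example above can be sketched in a few lines of Python. The function name and the framing (a fixed contract-value payment at settlement, followed by a spot-market purchase, as in the neighbor example) mirror the lesson's simplified story, not actual exchange mechanics:

```python
def futures_hedge_outcome(contract_price, spot_price, barrels):
    """Cash flows for the buyer in the lesson's simplified futures example.

    The seller pays the buyer the contract value at settlement; the buyer
    then purchases the physical barrels on the spot market. A negative
    net cost means the buyer came out ahead.
    """
    settlement = contract_price * barrels   # cash received at settlement
    spot_cost = spot_price * barrels        # cost to buy barrels on the spot market
    net_cost = spot_cost - settlement       # out-of-pocket cost after the hedge
    return settlement, net_cost

# The lesson's numbers: 100 barrels at a $50/bbl contract price.
settlement, net = futures_hedge_outcome(50, 45, 100)
# settlement == 5000; net == -500 (the negative sign indicates a $500 gain,
# since the spot price came in below the $50 contract price)
```

If the spot price had instead risen to $55, the same function returns a net cost of $500, illustrating the "made money / lost money" cases in the paragraph above.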
NYMEX (now called the "CME Group") provides a platform for buying and selling crude oil contracts from one month in advance up to eight and a half years forward. The time series of futures prices on a given date is called the "forward curve," and represents the best expectations of the market (on a specific date) as to where the market will go. Figure 1.5 shows an example of the "forward curve" for crude oil traded on the NYMEX platform. (Note that the data in this figure is dated, and is just for illustration - you'll get your hands dirty with more recent data soon enough.) The value of these expectations, naturally, depends on the number of market participants or "liquidity." For example, on a typical day there are many thousands of crude oil futures contracts traded for delivery or settlement one month in advance. On the other hand, there may be only a few (if any) futures contracts traded for delivery or settlement eight years in advance.
The NYMEX crude oil futures contract involves the buying and selling of oil at a specific location in the North American oil pipeline network. This location, in Cushing, Oklahoma, was chosen very specifically because of the amount of oil storage capacity located there and the interconnections to pipelines serving virtually all of the United States. You can read more about the Cushing oil hub in the Lesson 1 Module on Canvas, but it is one of the most important locations in the global energy market.
There is a wealth of data out there on crude oil markets. The following two videos will help orient you towards two websites where you can find information on crude oil prices, demand, shipments and other data. The first covers the NYMEX crude oil futures contract and the second covers the section on crude oil from the U.S. Energy Information Administration. Although these are U.S. websites, there is a good amount of data on global markets.
Many of the world's most active energy commodity markets are run by an organization called the CME Group. And in this video, I'm gonna show you the CME Group's website and show you how to find information on energy futures markets through the CME Group. And the specific example we're gonna use in this video is how to find information about the futures market for West Texas Intermediate Crude Oil, which is the benchmark crude oil futures market. So we're gonna go to the CME Group website, cmegroup.com. And then we're gonna click on Markets. And then in the section here where it says "Browse by," we're going to click on Energy. And then once we come to the energy futures and options page, we're going to click on "Products." And this will take us further down the page to where we can find information on a number of different energy futures markets. The one we're most interested in right now is crude oil futures. So we're going to click on that. And then it may take a second for the page to load. But what is going to pop up is a whole bunch of information about the futures market for West Texas Intermediate Crude Oil. And so I'll just say a little bit about what these different columns mean. The Month column is the month of contract delivery, not the month that the contract was traded. So September 2020, for example, that indicates the contract for deliveries of West Texas Intermediate crude oil in September 2020. The column that says "Last" shows the last recorded offer for West Texas Intermediate Crude Oil in that month, in units of dollars per barrel. So here, for the September 2020 contract, the last offer that was received is $42.78 per barrel. And the "Change" column is just the change, again in dollars per barrel, from the previous offer. The "Prior Settle" column is one that we'll use a lot. And that represents the last recorded price at which a buyer and a seller were matched and a trade happened.
And so here the "Prior Settle" column for the September 2020 contract is $42.89 per barrel. The "Open," "High," and "Low" columns show the price for the West Texas Intermediate Crude Oil futures contract at the start of the trading day, and then the highest price and the lowest price at which the crude oil contract traded on each day. The Volume column just shows you the number of contracts that have traded. And these orange flashes you see happen because this data is effectively updated in real time. And so when there are additional contracts traded or when the price changes, those will show up in real time on the screen. So this video has shown you how to get to the CME Group's website, where to find information on energy contracts, and specifically, how to get data on the West Texas Intermediate Crude Oil futures contract.
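The relationship between the quote columns described in the video can be shown with a toy example. The prices are the ones read off the screen in the video; the dictionary layout is just for illustration and is not the CME's data format:

```python
# A minimal sketch of the quote columns described above (values from the video).
quote = {
    "month": "SEP 2020",
    "last": 42.78,          # most recent offer, in $/bbl
    "prior_settle": 42.89,  # last matched trade price from the prior session, $/bbl
}

# The "Change" column is the difference between Last and Prior Settle.
change = round(quote["last"] - quote["prior_settle"], 2)
# change == -0.11, i.e., 11 cents/bbl below the prior settle
```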
One of the best sources for energy information is the U.S. Energy Information Administration, or EIA, but navigating the page can be a little bit tricky and you kind of have to know exactly what you're looking for.
So, here what I'm going to do is introduce you to the EIA webpage in general and point out where to find information about crude oil and refined petroleum products. So, the EIA homepage is www.eia.gov [6], and when you go there you'll see several things. One is that they have a sort of rotating list of stories, some of which you may find interesting. They have some specific data highlights for electricity or natural gas or oil, but if you're looking for something specific, generally the best place to start is the button at the top that says Sources and Uses. Once you click on that, you'll see basically all the information on the EIA website organized by energy source.
So, you have petroleum, coal, natural gas, electricity, and some others. So, for this I'm going to click on the petroleum and other liquids button, and that will take you to the petroleum home page at the EIA, where again they have some rotating cover stories, and they also have some brief articles that may be of interest and links to some of the EIA's regular publications that are relevant to crude oil and refined petroleum products. If you really know what you're looking for, once you get to the petroleum home page the best place to go is the tab that says Data. So, here, I'll click on that, and from the Data tab you can see the data that the EIA has on oil and petroleum products organized into several different categories. High-level data, things like national crude oil prices and national-level crude oil production and supply, are in the Summary tab. Prices for crude oil and petroleum products are in the Prices tab, and there are a lot of these.
You have reserves, information on refining, imports and exports, and shipments between different parts of the United States. The consumption data you have to look at with a little bit of care, because when somebody sells oil or a petroleum product to somebody else, the seller doesn't actually know what the buyer is doing with it, and so there's a difference between sales and consumption. So for example, when I put 10 gallons of gasoline in my tank, what I'm basically doing is buying gasoline, but I'm not using it at the same second that I bought it. I may use those 10 gallons within a week, or if I don't drive very much it may take me months to use those ten gallons of gasoline. So, the EIA is very careful to talk about sales, so these would be sales at the wholesale or retail level (for example, sales of fuel oil and kerosene), or terms that they call "disposition" or "product supplied." What they mean by disposition and product supplied is simply the movement of crude oil or petroleum products from, for example, storage facilities to retailing facilities, or from retailing facilities to end-use consumers.
If you look at Figure 1.5, you might notice that the price of crude oil generally declines as you move farther out into the future. This is called "backwardation." The opposite, in which the price of crude oil increases as you move farther out into the future, is called "contango." In general, we expect the crude oil market to be in backwardation most of the time; that is, we expect the future price to be lower than the current (spot) price. This can be true even if we expect demand for crude oil to increase in the future. Why would this be the case? If demand is expected to rise in the future, shouldn't that bring the market into contango?
Sometimes this does happen. Most often, though, the crude oil market is in backwardation because storing crude oil involves relatively low costs and has some inherent value. Suppose you had a barrel of crude oil. You could sell it now, or store it to sell later (maybe the price will be higher). The benefit to storing the barrel of oil is the option to sell it at some future date, or to keep holding the barrel. This benefit is known as the "convenience yield." Now, suppose that hundreds of thousands of people were storing barrels of oil to sell one year from now. When that year arrives, all those barrels of oil will flood the market, lowering the price. (A barrel in the future is also worth less than a barrel today, because of discounting, a concept that we will discuss in a later lesson.) Thus, because inventories of crude oil are high, the market expects the price to fall in the future.
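A forward curve's shape can be checked mechanically. Here is a minimal sketch, assuming the curve is given as a list of prices ordered from the nearest to the farthest delivery month; the helper name is hypothetical, not part of any data source used in this lesson:

```python
def curve_shape(forward_prices):
    """Classify a forward curve as backwardation, contango, or mixed.

    `forward_prices` is a list of $/bbl prices ordered from the nearest
    delivery month to the farthest.
    """
    pairs = list(zip(forward_prices, forward_prices[1:]))
    if all(later < earlier for earlier, later in pairs):
        return "backwardation"   # prices decline farther out the curve
    if all(later > earlier for earlier, later in pairs):
        return "contango"        # prices rise farther out the curve
    return "mixed"

curve_shape([60.0, 58.5, 57.2, 56.0])  # "backwardation"
curve_shape([40.0, 42.0, 43.5])        # "contango"
```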
The ability to store oil implies that future events can impact spot prices. A known future supply disruption (such as the shuttering of an oil refinery for maintenance) will certainly impact the futures price for oil, but should also impact the spot price as inventories are built up or drawn down ahead of the refinery outage.
We'll close this discussion with a very interesting and entertaining thing that happened in the crude oil market in April 2020, when the price of oil on the futures market went negative, trading at -$37.63 per barrel. This means that if you were a potential buyer of crude oil, someone would have paid you $37.63 for every barrel of oil you agreed to buy. I don't know about you, but no one has ever paid me to fill my car's gas tank. It usually works the other way around. What on earth happened here? This short blog post from RBN Energy [7] has a good explanation, and you can also watch RBN Energy's interview with Larry Cramer [8]. The reason for this price craziness has to do a little bit with panic in the oil market because of the coronavirus pandemic and a little bit with how futures markets work. Basically, what happened was that there were a bunch of crude oil traders who had contracts to buy crude oil for delivery in May 2020. Those traders either had to take physical delivery of a bunch of barrels of crude oil in May, or find someone else to assume their contract (this is called "closing one's position" in commodity market parlance). Well, since the pandemic had hit and crude oil demand had collapsed, there was no one in the market who really wanted to buy crude oil off of these traders. And, there was no place for these traders to physically put their crude oil that they were obligated to take in May 2020. So these traders got caught in a market squeeze and had to pay others to close their positions. It's crazy, and hasn't happened in the crude oil market before...but it makes perfect sense when you realize how this market actually works!
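To see the scale of the April 2020 event in dollar terms, here is a quick back-of-the-envelope calculation. It assumes the standard NYMEX WTI contract size of 1,000 barrels; the variable names are illustrative:

```python
# Sketch: what the April 2020 settlement at -$37.63/bbl meant for a trader
# who had to pay someone else to take over a single contract.
price_per_bbl = -37.63
contract_size = 1000  # barrels per NYMEX WTI futures contract

cash_flow_to_seller = price_per_bbl * contract_size
# A "seller" at this negative price paid about $37,630 per contract
# to close the position rather than take physical delivery in May 2020.
```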
Our transportation systems are highly dependent on petroleum. High and volatile prices for oil (and at the pump) naturally give rise to suggestions that oil production has peaked, or that we are running out of oil. Concern over oil supplies is not new. Looking back over the past hundred years [9], there have been five or six projections from governments, researchers and other groups suggesting that the world was about to run out of oil, precious metals or other valuable commodities.
In the 1950s, a geologist named M. King Hubbert looked at oil production data from all of the major oil-producing countries in the world (at that time). Based on his statistical analysis of the data, he projected that U.S. oil production would peak in the 1970s and that world oil production would peak during the first decade of the 21st century. These projections came to be known as "Hubbert's Peak." And it turns out that Hubbert's projections were highly accurate - U.S. oil production did peak in the 1970s and the collection of oil-producing countries that Hubbert originally studied did see their collective oil production peak in the early 2000s. So, maybe Hubbert had a point, and maybe there is something to the "peak oil" paranoia.
The reality of the "amount of oil" is more complex. When Hubbert made his predictions in the 1950s, the oil industry was still in its technical infancy. Most oil production came from so-called "elephant" oil fields, tremendously large reservoirs of easily-accessible oil. To imagine what these "elephant" fields were like, think about the theme song to the Beverly Hillbillies [10], when Jed Clampett shoots a hole in the ground and oil comes spouting up. The elephant oil fields that represented most oil production during Hubbert's time were basically like "Jed Clampett oil." What Hubbert was predicting was really the decline in Jed Clampett oil.
While Hubbert was right about Jed Clampett oil, his analysis did not consider the advances in technology that would make extraction of oil possible from less-accessible reservoirs. Nor did he consider that a rise in the price of oil would make oil extraction from so-called "unconventional" reservoirs profitable enough to undertake. Deepwater drilling (like in the Gulf of Mexico), the Canadian oil sands, and even extraction of oil from shales via hydraulic fracturing in North Dakota are all examples of unconventional oil production.
In fact, most geologists now believe that the amount of unconventional oil is much larger than all of the Jed Clampett oil fields put together. A recent study from University of California at Berkeley ("Risks of the Oil Transition" by Adam Brandt and Alex Farrell, Environmental Research Letters 2006) estimated that the world has used up only about 5% of known technically recoverable oil reserves. A future supply curve for liquid hydrocarbons (crude oil and usable synthetic liquid fuels), as shown in Figure 1.6 and adapted from the Brandt/Farrell paper, demonstrates this shift from "conventional" resources (i.e., Jed Clampett oil) to unconventional resources. These unconventional resources include the oil sands for which Alberta is now well known; bitumen-laden "heavy oils" (for which Venezuela is also known); enhanced oil recovery from conventional wells (EOR); synthetic fuels manufactured using natural gas or coal as a feedstock; and oil shales, which includes both naturally occurring deposits of oil in low-porosity shale formations (typically requiring hydraulic fracturing to extract) and hydrocarbon-rich shales that are used to produce a synthetic crude oil.
Energy Source | Cumulative Potential Production (billion barrels) | Production Cost ($ per barrel) |
---|---|---|
Conventional Oil | 2000 | 10-15 |
EOR | 2000-3000 | 15-20 |
Tar Sands/Heavy Oil | 3000-5000 | 20-22 |
Gas to liquid synfuel | 5000-8000 | 20-30 |
Coal to liquid synfuel | 8000-16000 | 32-34 |
Oil Shales | 16000-18000 | 30-90 |
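The table above can be read as a stepwise supply curve. The sketch below treats the production figures as cumulative totals and uses the midpoint of each cost range; both choices, and all the names, are my own illustrative assumptions rather than anything from the Brandt/Farrell paper:

```python
# Each entry: (resource name, cumulative potential production in billion
# barrels, assumed midpoint production cost in $/bbl).
supply_steps = [
    ("Conventional Oil",       2000,  12.5),
    ("EOR",                    3000,  17.5),
    ("Tar Sands/Heavy Oil",    5000,  21.0),
    ("Gas to liquid synfuel",  8000,  25.0),
    ("Coal to liquid synfuel", 16000, 33.0),
    ("Oil Shales",             18000, 60.0),
]

def marginal_resource(cumulative_bbl):
    """Name and midpoint cost of the marginal resource at a given
    cumulative output level (billion barrels)."""
    for name, upper, cost in supply_steps:
        if cumulative_bbl <= upper:
            return name, cost
    return None  # beyond the estimated resource base

marginal_resource(4000)  # ("Tar Sands/Heavy Oil", 21.0)
```

The point of the exercise: as cumulative production grows, the marginal barrel comes from a progressively more expensive, more unconventional resource.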
The reality is not that we are "running out of oil," but rather that we are transitioning from a period of easily-accessible oil at low prices to an era of increasingly unconventional production, which has higher costs. Companies will not try to develop these unconventional resources unless consumers are willing to pay the price (economic and environmental) or governments heavily subsidize oil production or consumption. So far, the world has found a way to consume plenty of $100 per barrel oil. At some point, unconventional oil exploration will get so expensive that consumers will look to lower-cost alternatives. Oil will price itself out of the market before the world truly runs out. The increasing popularity of hybrid vehicles, electric vehicles, bicycle transportation in urban areas and even natural gas vehicles are examples of such a shift, even if government policies are required to affect the decisions that consumers make.
Sheikh Ahmed Zaki Yamani, the longtime Saudi oil minister and a key founder of OPEC, has perhaps summed up the world oil market best. He said, "The stone age came to an end, not for lack of stones, and the oil age will end, but not for lack of oil."
In this lesson, you learned about the beginnings of the oil industry in Pennsylvania in the 1850s, the driving force for the early oil production, the key players, and the challenges, booms, and busts. The emergence, role, and importance of Rockefeller and Standard Oil in the early development of the oil industry and corporate structure in America were also discussed. You also learned how one country, one state, and one company dominated the early oil industry for many years. Finally, you were introduced to the international competitors to Standard Oil that emerged in Russia and the Dutch East Indies in the 1860s. The roles of key personalities such as the Nobels, Rothschilds, Samuels, and Kessler in making this competitive environment possible were also presented.
You have reached the end of Lesson 1! Double check the What is Due for Lesson 1? list on the first page of this lesson to make sure you have completed all of the activities listed there before you begin Lesson 2. Note: The Lesson 2 material will open Monday after we finish Lesson 1.
The links below provide an outline of the material for this lesson. Be sure to carefully read through the entire lesson before returning to Canvas to submit your assignments.
Despite all of the attention that it gets, no one actually wants crude oil – what people want are the various products that are derived from crude oil. These products include not only gasoline, heating oil, and so forth (which will be the focus of our discussions) but everyday consumer items like plastics, long underwear, and crayons. In this lesson, we’ll discuss markets for energy commodities that are refined from petroleum. While conditions in the crude oil market are highly influential in determining the prices of these products, in some sense each petroleum product market has a life of its own. Along the way, we will learn more about a system of accounting for crude oil and petroleum products that was developed after World War II and (despite its seemingly archaic nature) is still used today; why we don’t have more oil refineries even if gasoline prices are very high; and the strange connection between prices at the gas pump and the Federal Reserve.
By the end of this lesson, you should be able to:
This lesson will take us one week to complete. Please refer to the Course Calendar for specific due dates. See the specific directions for the assignment below.
If you have any questions, please post them to our Questions about EME 801? discussion forum (not email), located in the Lesson 00 module in Canvas. I will check that discussion forum daily to respond. While you are there, feel free to post your own responses if you, too, are able to help out a classmate.
Once crude oil is extracted from the ground, it must be transported and refined into petroleum products before it has any value. Those products must then be transported to end-use consumers or retailers (like gasoline stations or the company that delivers heating oil to your house, if you have an oil furnace). The overall well-to-consumer supply chain for petroleum products is often described as being segmented into three components (shown graphically in Figure 2.1).
Some companies in the petroleum industry have activities that fall into all of the upstream, midstream, and downstream segments. ExxonMobil is one example of such a firm. Others have activities that fall primarily into only one segment. The Kinder Morgan pipeline company is an example of a specialized petroleum firm, in this case belonging to the midstream segment. Many regions have local gas station brands that specialize in the downstream segment of the industry. One of the best-known regional examples is the Wawa chain of gas stations and convenience stores in eastern Pennsylvania, but large grocery stores and retailers like Costco and Wal-Mart are increasingly involved in downstream sales of petroleum products.
Petroleum refineries are large-scale industrial complexes that produce saleable petroleum products from crude oil (and sometimes other feedstocks like biomass). The details of refinery operations differ from location to location, but virtually all refineries share the same basic processes for separating crude oil into its various product components. Actual refinery operations are very complicated, but the basic functions of a refinery can be broken down into three categories of chemical processes:
The link below will take you to a 10-minute long video that provides more details on the various refining processes.
PRESENTER: For crude oil to be used effectively by modern industry, it has to be separated into its component parts and have impurities like sulfur removed. The most common method of refining crude is the process of fractional distillation. This involves heating crude oil to about 350 degrees Celsius, to turn it into a mixture of gases. These are piped into a tall cylinder, known as a fractionating tower. Inside the tower, the very long carbon chain liquids, such as bitumen and paraffin wax, are piped away to be broken down elsewhere. The hydrocarbon gases rise up inside the tower, passing through a series of horizontal trays and baffles called bubble caps. The temperature at each tray is controlled so as to be at the exact temperature at which a particular hydrocarbon will condense into a liquid. The distillation process is based on this fact. Different hydrocarbons condense out of the gas cloud when the temperature drops below their specific boiling point. The higher the gas rises in the tower, the lower the temperature becomes. The precise details are different at every refinery, and depend on the type of crude oil being distilled. But at around 260 degrees, diesel condenses out of the gas. At around 180 degrees, kerosene condenses out. Petrol, or gasoline, condenses out at around 110 degrees, while petroleum gas is drawn off at the top. The distilled liquid from each level contains a mixture of alkanes, alkenes, and aromatic hydrocarbons with similar properties, and requires further refinement and processing to select specific molecules. The quantities of the fractions initially produced in an oil refinery don't match up with what is needed by consumers. There is not much demand for the longer chain, high molecular weight hydrocarbons, but a large demand for those of lower molecular weight-- for example, petrol. A process called cracking is used to produce more of the lower molecular weight hydrocarbons. This process breaks up the longer chains into smaller ones.
There are many different industrial versions of cracking, but all rely on heating. When heated, the particles move much more quickly, and their rapid movement causes carbon-carbon bonds to break. The major forms of cracking are thermal cracking, catalytic or "cat" cracking, steam cracking, and hydrocracking. Because they differ in reaction conditions, the products of each type of cracking will vary. Most produce a mixture of saturated and unsaturated hydrocarbons. Thermal cracking is the simplest and oldest process. The mixture is heated to around 750 to 900 degrees Celsius, at a pressure of 700 kilopascals. That is, around seven times atmospheric pressure. This process produces alkenes, such as ethene and propene, and leaves a heavy residue. The most effective process in creating lighter alkanes is called catalytic cracking. The long carbon bonds are broken by being heated to around 500 degrees Celsius in an oxygen-free environment, in the presence of zeolite. This crystalline substance, made of aluminum, silicon, and oxygen, acts as a catalyst. A catalyst is a substance that speeds up a reaction or allows it to proceed at a lower temperature than would normally be required. During the process, the catalyst, usually in the form of a powder, is treated and reused over and over again. Catalytic cracking is the major source of hydrocarbons with 5 to 10 carbon atoms in the chain. The molecules most formed are the smaller alkanes, such as propane and butane, the components of liquefied petroleum gas, and pentane, hexane, heptane, and octane, which are used in petrol. In hydrocracking, crude oil is heated at very high pressure, usually around 5,000 kilopascals, in the presence of hydrogen, with a metallic catalyst such as platinum, nickel, or palladium. This process tends to produce saturated hydrocarbons, such as shorter carbon chain alkanes, because it adds hydrogen atoms to alkenes and aromatic hydrocarbons. It is a major source of kerosene jet fuel, gasoline components, and LPG.
In one method, thermal steam cracking, the hydrocarbon is diluted with steam and then briefly heated in a very hot furnace, around 850 degrees Celsius, without oxygen. The reaction is only allowed to take place very briefly. Light hydrocarbons break down to the lighter alkenes, including ethene, propene, and butene, which are useful for plastics manufacturing. Heavier hydrocarbons break down to some of these, but also give products rich in aromatic hydrocarbons and hydrocarbons suitable for inclusion in petrol or diesel. A higher cracking temperature favors the production of ethene and benzene. In the coking unit, bitumen is heated and broken down into petrol alkanes and diesel fuel, leaving behind coke, a fused combination of carbon and ash. Coke can be used as a smokeless fuel. Reforming involves the breaking of straight chain alkanes into branched alkanes. The branched chain alkanes in the 6 to 10 carbon atom range are preferred as car fuel. These alkanes vaporize easily in the engine's combustion chamber without forming droplets, and are less prone to premature ignition, which affects the engine's operation. Smaller hydrocarbons can also be treated to form longer carbon chain molecules in the refinery. This is done through the process of catalytic reforming. When heat is applied in the presence of a platinum catalyst, short carbon chain hydrocarbons can bind to form aromatics, used in making chemicals. A byproduct of the reaction is hydrogen gas, which can be used for hydrocracking. Hydrocarbons have an important function in modern society, as fuels, as solvents, and as the building blocks of plastics. Crude oil is distilled into its basic components. The longer carbon chain hydrocarbons may be cracked to become more valuable, shorter chain hydrocarbons, and short chain molecules can bind to form useful longer chain molecules.
The first process is known as distillation. In this process, crude oil is heated and fed into a distillation column. A schematic of the distillation column is shown in Figure 2.2. As the temperature of the crude oil in the distillation column rises, the crude oil separates itself into different components, called “fractions.” The fractions are then captured separately. Each fraction corresponds to a different type of petroleum product, depending on the temperature at which that fraction boils off the crude oil mixture.
Boiling Range in degrees Fahrenheit | Products |
---|---|
<85 | Butane and Lighter products |
85-185 | Gasoline blending components |
185-350 | Naphtha |
350-450 | Kerosene, jet fuel |
450-650 | Distillate (diesel, heating oil) |
650-1050 | Heavy Gas Oil |
>1050 | Residual fuel oil |
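The boiling ranges in the table above can be expressed as a simple lookup. This is a hypothetical helper for illustration; the function name and structure are mine, and the cutoffs are the table's values:

```python
def fraction_for_boiling_point(temp_f):
    """Map a boiling temperature (degrees Fahrenheit) to the distillation
    fraction in the table above (a simplified lookup for illustration)."""
    ranges = [
        (85,   "Butane and lighter products"),
        (185,  "Gasoline blending components"),
        (350,  "Naphtha"),
        (450,  "Kerosene, jet fuel"),
        (650,  "Distillate (diesel, heating oil)"),
        (1050, "Heavy gas oil"),
    ]
    for upper_bound, product in ranges:
        if temp_f < upper_bound:
            return product
    return "Residual fuel oil"   # everything boiling above 1050 F

fraction_for_boiling_point(400)  # "Kerosene, jet fuel"
```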
Distillation tower | Process | Ending Products |
---|---|---|
Gasoline Vapors/LPG | LPG | |
Naphtha | Reformer | Gasoline |
Kerosene | Jet Fuel | |
Diesel Distillate | Diesel Fuel | |
Medium Weight Gas Oil | Cracking Units, Alkylation Unit | LPG, Gasoline |
Heavy Gas Oil | Cracking Units | Motor Gasoline, Jet Fuel, Diesel Fuel |
Residuum | Coker or nothing | Industrial Fuel or Asphalt Base |
The second and third processes are known as cracking and reforming. Figure 2.3 provides a simplified view of how these processes are used on the various fractions produced through distillation. The heaviest fractions, including the gas oils and residual oils, are lower in value than some of the lighter fractions, so refiners go through a process called “cracking” to break apart the molecules in these fractions. This process can produce some higher-value products from heavier fractions. Cracking is most often utilized to produce gasoline and jet fuel from heavy gas oils. Reforming is typically utilized on lower-value light fractions, again to produce more gasoline. The reforming process involves inducing chemical reactions under pressure to change the composition of the hydrocarbon chain.
The production of final petroleum products differs from refinery to refinery, but in general, the oil refineries in the U.S. are engineered to produce as much gasoline as possible, owing to high demand from the transportation sector. Figure 2.4 shows the composition of output from a typical U.S. refinery.
Product | Gallons (per 42-gallon barrel of crude oil) |
---|---|
Diesel | 11 |
Heating Oil | 1 |
Jet Fuel | 4 |
Other Products | 7 |
Heavy Fuel Oil (residual) | 1 |
Liquefied Petroleum Gases (LPG) | 2 |
Gasoline | 19 |
Nearly half of every barrel of crude oil that goes into a typical U.S. refinery will emerge on the other end as gasoline. Diesel fuel, another transportation fuel, is generally the second-most-produced product from a refinery, representing about one-quarter of each barrel of oil.
Please read this short but informative background on the PADD system [11] from the Energy Information Administration:
Originally created during World War II for the purposes of regional rationing of gasoline supplies, the Petroleum Administration for Defense Districts (PADDs) are still utilized today to track regional movements of crude oil and (particularly) petroleum products in the United States. While the PADD system might seem a bit archaic, studying the movements of petroleum products between the PADD regions is useful for understanding how these markets are segmented in the United States. Figure 2.5 shows a map of the PADDs with the locations of oil refineries.
The description of the PADD system from the EIA includes some data on inter-PADD shipments of petroleum products (remember that most of these will be gasoline and the “distillates” – diesel fuel and heating oil). Figure 2.6 shows this data in visual form, again using the PADD designation map from the EIA. The figure indicates that there is substantial inter-PADD trade between the eastern states, the Gulf Coast and the Midwest. The Rocky Mountain states and the U.S. West Coast, on the other hand, are largely isolated from the rest of the United States and even from one another. Because of a lack of refinery capacity and pipeline capacity, the U.S. West Coast, in particular, has a gasoline and diesel market that is largely separate from the rest of the country. (This is also due in part to California’s gasoline standards, which are more stringent than in the rest of the U.S.)
Petroleum product pipeline maps are not available in the public domain, but you can view an image online at the following websites.
Gasoline, diesel fuel, and heating oil are volatile commodities – prices fluctuate up and down, sometimes dramatically, as shown in Figure 2.7 which compares spot prices for crude oil with retail prices (i.e., what you would pay at the pump) for gasoline and diesel fuel. Focusing on gasoline and diesel fuel, you will notice that there is some pattern to all of the noisiness in the graphic – prices for both fuels tend to rise starting in the spring and tend to decline in the late summer and fall. This reflects the increased demand for transportation (primarily driving) during the summer months.
Why are prices for petroleum products so volatile? The easy (but not entirely satisfying) answer is that the price of crude oil can be volatile. In the United States, about 65 cents of every dollar we pay for diesel fuel or gasoline represents the cost of crude oil. The rest represents refining, distribution, and marketing costs, along with taxes assessed at the state and federal levels. (In European countries, taxes are a much bigger component of gasoline costs.) You can see the volatility visually through the use of the crude oil price graphing tool [15] from the U.S. Energy Information Administration. It allows you to plot the price of West Texas Intermediate (the benchmark crude oil price in the U.S.) on a daily, monthly, or annual basis (depending on how much variability you want to see).
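To see where a figure like "65 cents of every dollar" comes from, here is a quick back-of-the-envelope calculation. The $95/barrel crude price and $3.50/gallon pump price below are assumed for illustration only; they are not figures from this lesson.

```python
# Rough share of a retail gasoline price attributable to crude oil.
# Both prices are hypothetical, chosen only to illustrate the arithmetic.

GALLONS_PER_BARREL = 42  # one barrel of crude = 42 U.S. gallons

def crude_share(wti_per_bbl: float, pump_price_per_gal: float) -> float:
    """Fraction of the retail pump price attributable to crude oil."""
    crude_cost_per_gal = wti_per_bbl / GALLONS_PER_BARREL
    return crude_cost_per_gal / pump_price_per_gal

share = crude_share(95.0, 3.50)  # assumed: WTI $95/bbl, pump price $3.50/gal
print(f"Crude oil share of pump price: {share:.0%}")  # roughly 65%
```

With these assumed prices, crude accounts for about 65% of the pump price, matching the rule of thumb above; when crude prices swing, most of that swing passes straight through to the pump.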
Starting in early 2008, prices climbed very rapidly, reaching nearly $150 per barrel in early July. But by December of that year, the price had dropped by almost 75%! Meanwhile, more recently, the price has dropped to $40 per barrel or even lower! What on earth is going on?
Supply and demand forces were the primary reason, but the nature of those forces is worth some time discussing. Suppose that the demand for crude oil or a refined petroleum product were to increase by some amount. How would that demand be met? There are basically two options: existing idle capacity (oil wells or petroleum refineries, for example) could be restarted, or the demand could be met by drawing down crude oil or petroleum product storage. If additional capacity were restarted, the increase in demand would lead to an increase in the market price, due to the higher costs of the capacity brought back online.
But what is the cost of releasing additional supply from storage? Here, we have a tricky economics problem – the actual cost (in accounting terms) of releasing additional supply is probably pretty small. But if you are the owner of some crude oil or petroleum products in storage, if you release supply from storage now, then you can’t release those same barrels again at some point in the future unless you replenish your storage facility (which requires buying oil or petroleum products on the open market and then putting them in storage). So, by releasing from storage now, you are giving up the opportunity for some future profit. This is referred to as “opportunity cost.” (Another term used in the finance field is the “convenience yield.”) Whether the opportunity cost is high or low depends on your expectations about the future price in the crude oil or petroleum products market. If you think demand is going to continue to be high tomorrow, then you face a high opportunity cost by releasing supply from your storage today. If you think that the increase in demand is transient, then that opportunity cost is low (because you think that demand is going to go back down tomorrow).
As it turns out, the opportunity cost of releasing supply from storage is related to the amount of spare production capacity, as shown in Figure 2.8. Generally, as the demand for oil or refined petroleum products rises, more expensive capacity (whether it’s oil wells or refineries) must be brought online, so the market price rises as demand goes up. As you get close to the total capacity constraint (the dashed vertical line in the figure), there is literally no more capacity to be brought online, so the price must rise rapidly in order for one of three things to happen: (i) additional supply is released from storage; (ii) new capacity can be constructed, which of course is expensive; or (iii) demand can go down because the price is too high. This transition from a slow and steady price increase to a regime of substantial volatility is endemic to the market for nearly every energy commodity, and the kink in the supply curve shown in Figure 2.8 is appropriately known as the “devil’s elbow” in the energy commodity business.
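The devil's elbow can be sketched numerically. The function below is a stylized, hypothetical supply curve (all parameter values are invented for illustration): the price rises gently with demand while spare capacity is ample, then climbs steeply as demand approaches the capacity constraint.

```python
# Stylized "devil's elbow" supply curve. All numbers are illustrative
# assumptions, not data from any actual energy market.

def market_price(demand: float, capacity: float = 100.0,
                 base_price: float = 40.0, slope: float = 0.2,
                 scarcity_premium: float = 50.0) -> float:
    """Hypothetical market price as a function of demand when the
    market faces a hard capacity constraint."""
    if demand >= capacity:
        raise ValueError("demand cannot exceed physical capacity")
    # Gentle increase while spare capacity is ample (more expensive
    # units are brought online as demand rises)...
    gentle = base_price + slope * demand
    # ...plus a premium that blows up as spare capacity vanishes.
    premium = scarcity_premium / (capacity - demand)
    return gentle + premium

for d in (50, 90, 99):
    print(f"demand={d:>3} -> price={market_price(d):6.2f}")
```

Moving demand from 50 to 90 barely changes the price, but the last few units before the capacity constraint send the price sharply upward; that transition is the kink ("devil's elbow") in Figure 2.8.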
Bringing on new capacity will eventually lead to lower prices if demand stays constant – all it does is shift the capacity constraint and the devil’s elbow in Figure 2.8 further to the right (this is shown in Figure 2.9). But this may have a self-reinforcing effect of keeping new capacity out of the market. Since new capacity in the refining business is generally built in large chunks (so-called “lumpy” investment – no one is going to build a refinery that processes a few marginal barrels of oil per day), if a new refinery is built without an increase in demand for petroleum products, the net effect may be to reduce the price for petroleum products so much that the new refinery is not profitable. Because of the devil’s elbow and the lumpy nature of bringing new sources online, energy commodity markets generally do not have a stable equilibrium like you might find in economics textbooks.
NYMEX (now called the "CME Group") offers futures contracts for several refined petroleum products - the most frequently-traded are heating oil and gasoline. The video below shows you where to find these prices on the CME Group website.
In this video, I'm going to show you where to find information on futures markets for refined petroleum products from the CME Group. So again, we'll start at the CME Group website and we'll click on "Markets." We'll click on "Energy." And then we'll click on "Products." And this will take us down to all of their different energy commodity futures markets. So the two that I'm going to show you are the heating oil futures contract, which says "New York Harbor ultra low sulfur diesel," and the gasoline futures contract, which here says "RBOB," which stands for "Reformulated Blendstock for Oxygenate Blending." So I'll click on the gasoline one first. And again, we have to wait a minute for information to pop up. And you'll see very similar information about prices and trading volumes that you saw for the crude oil contracts. So we have the month of delivery, we have the last traded price, and the change in the price. And here for the gasoline contract, you'll note that the prices are in dollars per gallon, not dollars per barrel, which is why the numbers look a lot different. And then you'll have the prior settle for the gasoline contract, the opening high and low prices for the day, and the volume of contracts traded. So that was the gasoline futures contract. I'll now show you where to find the heating oil contract, which is the "New York Harbor ultra low sulfur diesel." And so we'll click on that one. And again, the same kind of information shows up. Now the prices here, like the gasoline contract, are in dollars per gallon. So for the September 2020 contract, this is 1 dollar and 24.31 cents per gallon, not per barrel like in the crude oil contract. So this is how we have used the CME Group website to find information on two of the most commonly traded refined petroleum product markets, heating oil and gasoline.
Oil refineries produce value-added petroleum products from crude oil. Profitability is thus determined by several different variables:
Determining profitability for a specific refinery is very difficult since data on operational and environmental compliance costs are generally not available. A rough measure could be obtained by calculating the cost of crude-oil feedstock (though to do this with precision would require knowledge of the crude blends used in a specific refinery) and comparing that cost with the market value of the suite of products produced at the refinery. This still requires more information than might be publicly available for a typical refinery, and is subject to market conditions for the various products produced.
A useful but simplified measure of refinery profitability is the “crack spread.” The crack spread is the difference between the sales price of the refined products (gasoline and fuel oil distillates) and the price of crude oil. An average refinery would follow what is known as the 3-2-1 crack spread, meaning that for every three barrels of oil, the refinery produces an equivalent two barrels of gasoline and one barrel of distillate fuels (diesel and heating oil). This ratio of refined product output closely mirrors the composition in Figure 2.4, but remember that the crack spread is only a first-order approximation of how profitable a refinery would be at the margin! The higher the crack spread, the more money the refinery will make, so it will utilize as much capacity as it has available. Conversely, at some lower crack spread prices, it may actually be in the refinery’s best interest, due to the costs of operating the plant, to scale back the amount of capacity utilized.
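A minimal sketch of the 3-2-1 arithmetic is below. The input prices (WTI near $95/bbl, gasoline near $2.85/gal, heating oil near $3.05/gal) are assumed 2012-era values chosen for illustration; they are not necessarily the exact figures used in the lesson's worked example.

```python
# 3-2-1 crack spread: revenue from 2 barrels of gasoline plus 1 barrel
# of distillate, minus the cost of 3 barrels of crude, per barrel of
# crude input. Product prices are quoted per gallon, so convert at
# 42 gallons per barrel. All prices here are illustrative assumptions.

GALLONS_PER_BARREL = 42

def crack_spread_321(crude_bbl: float, gasoline_gal: float,
                     distillate_gal: float) -> float:
    """Gross 3-2-1 refining margin, in dollars per barrel of crude."""
    revenue = (2 * gasoline_gal + 1 * distillate_gal) * GALLONS_PER_BARREL
    cost = 3 * crude_bbl
    return (revenue - cost) / 3  # normalize to one barrel of crude

spread = crack_spread_321(95.00, 2.85, 3.05)  # assumed prices
print(f"3-2-1 crack spread: ${spread:.2f}/bbl")
# Subtracting roughly $20/bbl of variable refining costs (catalysts,
# utilities, labor, short-term financing) gives the net margin:
print(f"Net margin after ~$20/bbl variable costs: ${spread - 20:.2f}/bbl")
```

With these assumed prices, the gross spread is $27.50/bbl and the net margin is $7.50/bbl, illustrating how variable refining costs eat up much of the headline crack spread.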
Calculating the 3-2-1 crack spread typically uses published prices for crude oil, gasoline, and distillates. These prices are typically taken from the New York Mercantile Exchange. The NYMEX trades contracts for crude oil and gasoline, but has no contract for diesel fuel (the most-produced of the distillate fuel oils). In calculating the 3-2-1 crack spread, prices for heating oil futures are typically used instead. Below is an example of how to calculate the crack spread, using data from 2012.
The crack spread, of course, is not a perfect measure of refinery profitability. What it really measures is whether the refinery will make money at the margin – i.e., whether an additional barrel of crude oil purchased upstream will yield sufficient revenues from saleable products downstream. In reality, existing refineries must consider their refining costs in addition to just the cost of crude oil. These costs include labor (though that is generally a small part of refinery operations); chemical catalysts; utilities; and any short-term financial costs such as borrowing money to maintain refinery operations. These variable costs of refining may amount to perhaps $20 per barrel (depending on conditions in utility pricing and financial markets). In the example above, the true margin on refining would be $6.58 per barrel of crude oil – much lower than the simple crack spread would suggest.
The crack spread tends to be sensitive to the slate of products produced from the refinery. In the example above, we used gasoline and distillate fuel oil (heating oil) because those are two typically high-valued products, and U.S. refineries are generally engineered to maximize production of gasoline and fuel oil.
The crack spread is also sensitive to the selection of the oil price used. In the example above, we used the NYMEX futures price for crude oil, which recall is based on the West Texas Intermediate blend - a fairly light crude oil. Many U.S. refineries, however, are engineered to accept heavier crude oils as feedstocks. If there are systematic differences in the prices of heavy crude oils versus West Texas Intermediate, then the crack spread calculation (while illustrative) may not be sensible for a particular refinery.
The Energy Information Administration recently published a couple of good articles describing how the U.S. refinery fleet has been adjusting to changes in U.S. crude oil production. Not only has the quantity of crude oil produced in the U.S. been increasing rapidly, but the oil coming out of the large shale plays (like the Bakken in North Dakota) is much lighter than the crude oils typically accepted by U.S. refineries.
The first article [16], published on 7 January 2015
The second article [17], published on 15 January 2015
In this lesson, you learned a little bit about the process of refining crude oil (which itself does not have many direct uses or substantial economic value) into end-use products like gasoline and diesel fuel. Most refineries in the U.S. are engineered to maximize gasoline production, leading to the 3-2-1 crack spread method of assessing refinery profitability. You also learned that part of the volatility of petroleum product prices is due to seasonal factors; part is due to swings in the price of crude oil; and part is due to how tight refining or production capacity is. When capacity is tight, small shifts in the supply/demand balance can lead to disproportionately large price fluctuations.
You have reached the end of Lesson 2! Double check the What is Due for Lesson 2? list on the first page of this lesson to make sure you have completed all of the activities listed there before you begin Lesson 3. Note: The Lesson 3 material will open Monday after we finish Lesson 2.
The links below provide an outline of the material for this lesson. Be sure to carefully read through the entire lesson before returning to Canvas to submit your assignments.
There is an old joke from the oil industry that goes something like this:
An oil company executive walks into a bar and sees a wildcatter slouched over the bar, staring into his drink.
“What’s the matter,” says the oil company executive, “another dry hole?”
“Worse,” says the wildcatter, “we found gas.”
For many decades, natural gas was the poor cousin to crude oil. Often found alongside crude oil in reservoirs, natural gas was considered a low-value waste product and was often flared or vented into the atmosphere in very large quantities (enough to supply several European countries for an entire year) in order to more easily access the high-value crude. Nowadays, the world has changed. Particularly with the rapid emergence of unconventional natural gas resources (these are often grouped into a catch-all category of “shale gas,” but include natural gas found in sandstone formations, coal beds, and other types of geologic formations other than shales), there are lots of perceived opportunities in natural gas. We will discuss shale gas in the next lesson, but first we will walk through an introduction to natural gas as a commodity and the functioning of North American markets for natural gas.
By the end of this lesson, you should be able to:
Aside from the online materials, you will need to access the websites of the EIA and NYMEX (CME Group). The EIA has a nice introductory reading [18] to the world of natural gas as part of their "Energy Explained" series. You will also need to download the following reading from the Lesson 03 module:
This lesson will take us one week to complete. Please refer to the Course Calendar for specific due dates. See the specific directions for the assignment below.
If you have any questions, please post them to our Questions about EME 801? discussion forum (not email), located in the Lesson 00 module in Canvas. I will check that discussion forum daily to respond. While you are there, feel free to post your own responses if you, too, are able to help out a classmate.
Please read the natural gas section [18] of the "Energy Explained" series from the Energy Information Administration before you get started with the material for this lesson.
Like crude oil, natural gas is an energy source based on hydrocarbon chains, but the composition of natural gas is generally different than the composition of crude oil. Natural gas is primarily composed of methane, though some natural gas deposits also contain substantial fractions of other hydrocarbon gases or liquids such as ethane and propane (these are longer hydrocarbon chains that have substantial value as chemical feedstocks). Most gas deposits also contain impurities such as sulfur or other carbon compounds that must be separated prior to the gas being injected into transmission or distribution pipelines. Gas deposits that consist primarily of methane are known as “dry” gas deposits, while those with larger fractions of other hydrocarbons are known as “wet” or “rich” gas deposits.
Unlike oil, natural gas is essentially wedded to its transportation system – without pipelines (and liquefied natural gas tankers, which we’ll discuss later), there is no economical way to get large quantities of gas to market. Moreover, natural gas pipelines generally need to be dedicated assets. Using oil or petroleum product pipelines to move natural gas is not really possible, and moving other products in natural gas pipelines is not possible without completely repurposing the pipeline (and the injection/withdrawal infrastructure on either end). This asset specificity and complementarity between natural gas and the pipeline transportation infrastructure has been a significant factor in the development of the natural gas market. Each has little use without the other.
Natural gas is used in industrial, commercial, residential, and electric power applications. Natural gas demand in the United States has remained virtually flat for the better part of a decade, although, as shown in Figure 3.2, there has been a shift in the composition of that demand, away from industrial utilization and towards the electric power sector. Expectations are that the emergence of low-priced gas supplies from unconventional sources will change the drivers of natural gas demand in two ways – first, demand in both the industrial and electric power sectors is anticipated to increase. Second, surplus natural gas supplies may open up an entirely new export sector for North American natural gas.
Each demand sector has its own intra-annual pattern of natural gas demand. Residential demand, for example, tends to be highest in the winter (because of demand for space heating), while demand in the electric power sector tends to be highest in the summer, due to higher demand for electricity (for air conditioning). Overall, natural gas demand in the United States peaks in the wintertime, with a lesser peak during the summer. A typical set of annual demand profiles is shown in Figure 3.3. There are some shifts occurring here as well, particularly as demand from the electric power sector continues to climb.
Beginning in the mid-to-late 1990s, the U.S. electric power sector started undergoing a transformation away from building new coal-fired power plants and towards building new natural gas plants. This shift took place for a variety of reasons, including increasingly stringent environmental requirements for power plants and a shift in the market for electricity induced by deregulation and restructuring. This shift towards more gas-fired power generation was associated with a pronounced increase in both the average spot price of natural gas and the volatility of natural gas prices, as shown in Figure 3.4. This association seems to have broken down since 2009 – there has been additional investment in, and utilization of, gas-fired power generation, but a pronounced decline in natural gas prices. This trend reflects the influence of additional gas supplies from unconventional deposits (primarily shales and tight sandstone formations) coming online. Referencing our discussion of capacity constraints in the lesson on petroleum refining, the price and utilization trends in Figure 3.4 reflect textbook energy economics – the build-out in natural gas generation represented increased demand without a corresponding increase in natural gas supply, pushing demand towards the capacity constraint and rapidly increasing prices. (Some price spikes, such as those observed in 2000, 2005, and 2008, are associated with extreme weather events like hurricanes or with interruptions to gas pipeline networks, such as an explosion in the Western U.S. in 2000.)
Please read Chapter 5 of the van Vactor reading (Flipping the Switch: The Transformation of Energy Markets) before you proceed with the written material here. Sections 5.3 to 5.5 are particularly important. (Registered students can access a PDF file of the van Vactor dissertation in the Lesson 03 module in Canvas.)
The North American natural gas market is structured based on what has been called the “hub” model of gas pricing. In markets with hub pricing, the interaction of supply and demand sets prices at a small number of specific locations. These locations are the "hubs." (In the van Vactor reading and in one of the assignment questions for this week, there is some discussion about the properties of a good "hub" trading point.) The "local" price at any other location or points of consumption in the commodity network is thus the hub price plus the cost of transportation from the hub. Price differences between any two points in the gas pipeline network represent just the cost of transportation between those two points.
We can also describe this using some terminology from the natural gas industry (this terminology is also commonplace in the oil industry). The price at a point where it is easy to obtain natural gas without much transportation – at or near the producing field – is referred to as the wellhead price. The point where the natural gas enters the distribution system for local delivery (see Figure 5.1 from the van Vactor reading) is known as the citygate. The citygate price should generally be higher than the wellhead price because of the transportation cost associated with moving the gas from the wellhead to the citygate. The wellhead price simply represents the market equilibrium for the natural gas commodity, not including any transportation costs.
One implication of the hub pricing model is that the entire gas market should roughly obey the law of one price, which says that if the gas market is working efficiently, then price differences should reflect transportation costs. A change in the wellhead price of natural gas would be reflected everywhere else in the network.
Here is a simple illustration of the law of one price, using the small network shown in Figure 3.5. In this example, p represents the price at a given location and c represents the transportation cost to get from the supply point (wellhead) to a specific location. Gas fields A and B are connected via pipeline to market centers (you can think of these as citygates) 1 and 2. A and B can deliver to both markets, but the transportation cost c is higher to get to market 2 than to market 1. Thus, c1 < c2 and, in equilibrium, p1 < p2: gas at Market 1 will be cheaper than gas at Market 2.
Further, in equilibrium p2 = p1 + (c2 – c1), which in words says that the price at Market 2 would be equal to the price at Market 1 plus the transportation cost to Market 2 from Market 1 (the transportation cost to Market 1 from the wellhead is already reflected in the price at Market 1). Using numbers, suppose that the price of natural gas at the wellhead (either field A or B) was $5 per million BTU ($/MMBTU). The transportation cost to market 1 is $1/MMBTU and the transportation cost to market 2 is $3/MMBTU. If this gas market obeyed the law of one price, then the citygate price at market 1 would be $5/MMBTU + $1/MMBTU = $6/MMBTU. The citygate price at market 2 would thus be $6/MMBTU + ($3/MMBTU - $1/MMBTU) = $8/MMBTU.
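The arithmetic above can be checked with a few lines of Python, using the same wellhead price and transport costs as in the text:

```python
# Law-of-one-price check for the two-market example in the text:
# wellhead gas at $5/MMBTU, transport to market 1 costs $1/MMBTU,
# transport to market 2 costs $3/MMBTU.

wellhead = 5.0        # $/MMBTU at field A or B
c1, c2 = 1.0, 3.0     # transport costs to markets 1 and 2

p1 = wellhead + c1    # citygate price at market 1
p2 = p1 + (c2 - c1)   # market 2 = market 1 plus the transport differential

print(f"Market 1 citygate: ${p1:.0f}/MMBTU")  # $6
print(f"Market 2 citygate: ${p2:.0f}/MMBTU")  # $8

# Equivalently, p2 is just the wellhead price plus transport to market 2:
assert p2 == wellhead + c2
```

The final assertion makes the key point of the law of one price explicit: pricing off the hub (wellhead + differential from market 1) and pricing directly (wellhead + c2) give the same answer, so price differences between any two points reflect only transportation costs.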
One implication of the law of one price is that changes in supply or demand at one location can affect pricing at all locations in the network. Here is another example, again using the same network (see Figure 3.6). Suppose that demand in market 2 were to increase. Suppliers in both fields would increase their offer prices so that p2 > p1 + (c2 – c1). Assuming that there are no constraints on the size of the pipeline, that increased demand at market 2 would bid up the price for all market points in the network since all suppliers would try to move their gas to market 2. (Of course, there is some limit to this redirection of supply since demand in market 2 is finite.) Supply to market 1 would decrease, raising prices in market 1 to restore equilibrium (where the price difference between markets is equal to the difference in transport costs).
It is possible for markets connected by pipelines to depart from the law of one price. The simplest example of how this might happen is if there is a constraint or an interruption on a pipeline. For example, suppose that the pipeline between markets 1 and 2 were to be removed from service, isolating market 2. What would happen to prices at market 1? Market 2? In the assignment for this lesson, you will get some practice looking at the law of one price in the North American natural gas market using data from the U.S. Energy Information Administration.
Every natural gas field within some geographic area has the potential to be a hub pricing point. Locations that are close to producing areas and are also connected to a large number of gas transmission pipelines also have the potential to be good hub pricing points. In general, hub pricing points need to be highly connected to the rest of the network – it should be easy to move gas from the hub point to any other location in the network.
The major hub pricing point in North America is the “Henry Hub,” which is a physical location in Louisiana, indicated in Figure 3.7. Henry Hub is well-connected to the rest of the North American market (as indicated by the flows of gas through North America as shown graphically in Figure 3.8).
The Henry Hub is also the point at which the NYMEX futures contracts for natural gas are priced. The following video shows how to access natural gas pricing data from EIA and NYMEX.
In this video, I'm going to show you how to find natural gas futures price information on the CME Group website. The process is going to be similar to how we found futures price information for crude oil and refined petroleum products, but natural gas is just in a little bit different place. So we're going to start from the CME Group homepage and we're gonna go to markets, and then we're gonna go to energy. Then when the page loads, we're gonna go to products. And then last time, for the crude oil futures information, we had used the crude and refined section. Now we're going to scroll down and go to the Natural Gas section. And what we want here are the Henry Hub natural gas futures. So remember that the Henry Hub is the location where all natural gas futures contracts in North America are priced. So we're gonna click on Henry Hub. And it's going to bring us to the futures price information page, which is going to look similar to what we saw for other futures contracts for crude oil and refined petroleum products. So we're going to have the month for delivery. We're going to have the last recorded price. And here, the prices are in dollars per million BTUs of natural gas. So this is a very different price unit than we saw for oil or refined petroleum products. And then we have the prior settle column, which is the last settlement price that was recorded. And then we have the opening price, the high and low price for the day for each particular contract, and the number of contracts traded.
Natural gas as an energy commodity is different than crude oil in several important ways, but markets for natural gas and oil share some important similarities as well. Like crude oil, natural gas is an energy commodity whose prices were gradually deregulated through the 1980s, culminating in full wellhead price deregulation in the early 1990s. Pricing of natural gas is determined by supply, demand, and transportation (pipeline) factors, and the natural gas market has its own distinct North American price benchmark - the Henry Hub - just as crude oil has West Texas Intermediate as its North American benchmark (we will get more into globalization of gas markets in the next lesson). Unlike crude oil, natural gas is used directly in virtually all sectors of the U.S. economy (residences, businesses, industry, power generation), with the exception of transportation. Also, unlike crude oil, natural gas is more homogeneous but more difficult to transport, meaning that trade in natural gas is more closely tied to the availability of pipeline capacity than is trade in other energy commodities.
You have reached the end of Lesson 3! Double check the What is Due for Lesson 3? list on the first page of this lesson to make sure you have completed all of the activities listed there before you begin Lesson 4. Note: The Lesson 4 material will open Monday after we finish Lesson 3.
The links below provide an outline of the material for this lesson. Be sure to carefully read through the entire lesson before returning to Canvas to submit your assignments.
The story of the rapid shift in the North American natural gas supply/demand picture is in some sense captured by two maps created by the Federal Energy Regulatory Commission (FERC) and one from the Energy Information Administration. The links for the maps are below this paragraph. The first map shows the locations of liquefied natural gas (LNG) import terminals in the United States. If you click on the links for each terminal, you will find that prior to the late 1990s/early 2000s there were but a few LNG importing terminals in the U.S. This number skyrocketed alongside the build-out of natural gas fired power generation as industry and policymakers realized that there did not seem to be enough gas on the North American continent, deliverable to U.S. power plants, relative to demand. Analysts quaked in their boots at the thought of power plant gas demand surpassing 4 trillion cubic feet (TCF) per year. The result was a rush to build facilities to import LNG. Fast forward to the past couple of years, and to another set of maps on the FERC website showing the locations of proposed LNG export terminals that would take natural gas produced on the North American continent and export it to global markets in Europe, Asia, and elsewhere. There are currently more export terminals being proposed for approval by FERC than there are operating import terminals. Operators of import terminals, whose facilities are highly underutilized at this point, are also seeking permission from the federal government to retrofit those terminals to start exporting gas. Within the space of less than a decade, the U.S. has gone from famine to feast in terms of its natural gas supplies. The FERC site for LNG [19] has some nice maps for existing and proposed or approved LNG import and export terminals.
The development of unconventional natural gas and oil supplies, most notably gas and oil naturally occurring in shale formations (the EIA has a nice map [20] of where these formations are located), has been a contentious topic. Impacts on the environment and society are still in the process of being better understood. Our focus here will be on understanding how development of these resources may impact markets for energy and energy project decision-making.
By the end of this lesson, you should be able to:
Aside from the online materials, you will need to read three background pieces on economic and political factors influencing the market for shale gas and shale oil.
(*Note for those who are interested: this piece is part of a larger work that I and some other Penn State folks drew up for the Congressional Research Service on water issues in shale gas extraction. This is kind of tangential to the course but I have posted the water piece on Canvas for anyone who would like to read it.)
You should also download a report from the U.S. Energy Information Administration [23] on the potential impacts of a globalized natural gas market on U.S. energy prices.
These articles will be used as the basis for discussion questions at the end of this lesson.
This lesson will take us one week to complete. Please refer to the Course Calendar for specific due dates. See the specific directions for the assignment below.
If you have any questions, please post them to our Questions about EME 801? discussion forum (not email), located in the Lesson 00 module in Canvas. I will check that discussion forum daily to respond. While you are there, feel free to post your own responses if you, too, are able to help out a classmate.
A good introduction to shale gas and hydraulic fracturing [20], including a map of shale energy (gas and oil) producing areas in the United States, can be found on the Energy Information Administration website. The article makes a distinction between “shale gas” and “tight gas.” This is something of an arbitrary classification, since most shale energy formations would be considered “tight” by geologists. (The term “tight” simply refers to the low porosity of the formation – making it more difficult to extract oil and gas.) “Tight gas” in this case would refer to naturally-occurring gas deposits in low-permeability formations other than shale, such as some sandstones. Some unconventional natural gas development in the Rocky Mountains, for example, is occurring in tight sandstone formations rather than in shales per se.
Shale gas and shale oil1 (collectively referred to as shale energy), long considered “unconventional” hydrocarbon resources, are now being developed rapidly. Shale energy represents a substantial fossil fuel resource for heating, electricity generation, transportation fuel, and industrial use. The extraction of naturally-occurring oil from tight shales (particularly in the Bakken formation in North Dakota) also has the potential to improve the energy security of the United States by reducing quantities of crude oil and refined petroleum products purchased on global markets. Natural gas (primarily methane) has lower air emissions of many pollutants relative to coal as a fuel source and is seen by many as a “bridge fuel” to a less carbon-intensive energy future. Shale gas deposits vary in size, depth, and quality across the United States. Some deposits occur primarily in one state, such as the Barnett Shale in Texas. Others, such as the Marcellus Shale, underlie multiple states, including New York, Pennsylvania, and West Virginia. The Marcellus Shale is currently considered the largest potential resource of shale gas, and is close to large energy demand centers in the northeast.
As discussed in the EIA article, since the end of the first decade of the 21st century, shale gas has grown from virtually nothing to more than 20% of total U.S. natural gas production in less than five years. The Marcellus and Utica shale states have gone from marginally relevant players in the North American gas market to major players in a rapidly globalizing natural gas sector. The growth in shale gas production has already had substantial impacts, including making the U.S. over 95% self-sufficient in gas supply and leading to flow reversals in major pipelines. The U.S. has also started exporting substantial quantities of gas to both Canada and Mexico. All indications are that the resource potential in North America is large enough to sustain increased consumption levels for several decades (the exact size of the economically extractable resource depends on emerging knowledge about geologic formations as well as advances in drilling and extraction technologies).
As this article from the EIA suggests [24], the United States is not the only place on earth with substantial recoverable shale energy reserves. But it is the only place that has seen substantial resource development. The report has a very nice map of assessed global shale resources. You will notice that a number of the major potential shale energy basins outside of the United States are located in Russia or Eastern Europe (an area that has seen some geopolitical unrest in recent years) and in Africa and Australia, which face water constraints. The Economist has a nice article discussing some reasons that shale development in Europe [25] in particular can be expected to be slow (while the article is a good one, it is worth noting that editorially The Economist is very favorable towards shale energy development).
1 In this context, “shale oil” refers to the practice of extracting naturally-occurring petroleum from tight shale formations (sometimes called “oil-bearing shales”) utilizing hydraulic fracturing methods. Shale oil, as used here, is distinct from “oil shale,” which refers to unconventional oils extracted from rock formations through pyrolysis.
Part of the reason for the rapid emergence of shale gas in the North American natural gas market has to do not only with the sheer number of wells drilled but also with the production profile of shale gas wells over time. This production profile is referred to as the decline curve. The production profile of typical shale wells entails a rather sharp initial decline in the production rate and, after a few years, a much slower rate of decline. This becomes very important in determining the profitability of shale gas wells versus conventional gas wells. We'll return to this issue in the second half of the course. For now, we'll focus on being able to model the production profile of a shale gas well.
Illustrations of shale gas production decline curves are depicted below in Figure 4.1, for two cases – one with 3.6 billion cubic feet of estimated ultimate recoverable reserves (EUR) and a second with 4.6 billion cubic feet EUR. Production in the initial several years of the well’s productive life declines hyperbolically, and at some point, the production decline levels off reflecting an exponential (constant annual percentage) decline rate.
The equation for hyperbolic decline, assuming that production decline is tracked monthly, is:

Pt = IP / (1 + b × D × t)^(1/b)     (1)

where Pt is production during month t, IP is the initial production rate (in million cubic feet per day), and b and D are parameters that are estimated based on the behavior of a particular well or similar wells.
Meanwhile, the equation for exponential decline is:

Pt = IP × e^(-Ds × t)     (2)

where all variables are identical to those defined in equation (1), with the exception of Ds, which is the monthly decline rate at the point where production shifts from hyperbolic to exponential tail decline.
As an example, suppose that initial production was 10 million cubic feet per day, and the value of the b and D parameters are 1.8 and 0.58, respectively. The well exhibits hyperbolic decline for the first 60 months and then shifts to exponential decline for the remainder of the well’s life.
We would thus estimate production in month 24 (i.e., at the end of the second year of production) as

P24 = 10 / (1 + 1.8 × 0.58 × 24)^(1/1.8) ≈ 1.63 million cubic feet per day.
As a second example (and a check that you understand how to use the formula), show for yourself that the production for this same well in month 47 would be 1.137 million cubic feet per day.
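Equation (1) is straightforward to check numerically. Below is a minimal sketch in Python (the function and variable names are mine, not from the course materials):

```python
def hyperbolic(ip, b, d, t):
    """Hyperbolic decline (equation 1): production rate in month t,
    in million cubic feet per day, from initial rate ip and
    decline parameters b and D."""
    return ip / (1 + b * d * t) ** (1 / b)

# Worked example: IP = 10 MMcf/day, b = 1.8, D = 0.58
month_24 = hyperbolic(10, 1.8, 0.58, 24)   # end of second year
month_47 = hyperbolic(10, 1.8, 0.58, 47)   # ≈ 1.137 MMcf/day, as in the text
```

Plugging in other months is a quick way to verify your hand calculations as you work through the rest of this example.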
To estimate production at month 100 (past the transition point to exponential decline), we would take the production at the very end of the hyperbolic decline period (equal to 0.996 million cubic feet per day, which you should check yourself using the formula, using t = 60) as the initial production for exponential decline, and start counting time periods from that point. The decline parameter Ds for the exponential tail decline would be estimated as the percentage decline at the point where production shifts from hyperbolic to exponential.
We'll walk through this in a couple of steps. We have already calculated that production in month 60 is 0.996 million cubic feet per day. Assuming a continuation of hyperbolic decline, we would find that production in month 61 was 0.987 million cubic feet per day (again, check this yourself).
What we'll use for the decline parameter Ds in the exponential decline is the percentage difference between production in month 60 and month 61. So what we get for the exponential decline parameter is:

Ds = (0.996 − 0.987) / 0.996 ≈ 0.009

(try it yourself).
From month 61 onwards, we would use the exponential decline function to estimate monthly production, with the month 60 production (0.996 million cubic feet per day) as the initial production and Ds = 0.009 as the decline parameter. Moreover, we would start counting periods (the t variable in the exponential decline equation) from the transition point to exponential decline and not from the original initial production level of 10 million cubic feet per day.
Now, back to the question of production in month 100, which would be the 40th month of exponential decline. We use the exponential decline equation with IP = 0.996 million cubic feet, t = 40 months and Ds = 0.009. Production in month 100 would be calculated as:

P100 = 0.996 × e^(-0.009 × 40) ≈ 0.695 million cubic feet per day.
As another example for you to check your understanding, show that production in month 83 for this well would be estimated at 0.8098 million cubic feet per day.
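The two-stage procedure above can be collected into a single function that switches from hyperbolic to exponential decline at the transition month. This is an illustrative sketch using the worked example's parameters (names are mine); small rounding differences versus the hand calculations are expected because the code carries full precision:

```python
from math import exp

def production(t, ip=10.0, b=1.8, d=0.58, t_switch=60):
    """Production rate (MMcf/day) in month t for the example well:
    hyperbolic decline (equation 1) through month t_switch, then an
    exponential tail (equation 2) anchored at the month-t_switch rate."""
    hyp = lambda m: ip / (1 + b * d * m) ** (1 / b)
    if t <= t_switch:
        return hyp(t)
    p0 = hyp(t_switch)                  # rate at the transition (≈ 0.996)
    ds = (p0 - hyp(t_switch + 1)) / p0  # one-month fractional decline (≈ 0.009)
    return p0 * exp(-ds * (t - t_switch))

# production(83) ≈ 0.81 and production(100) ≈ 0.69 MMcf/day,
# matching the worked examples to rounding.
```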
Referring again to Figure 4.1, annual production from this hypothetical shale gas well is over 650 million cubic feet during the first year, about 300 million cubic feet during the second, about 130 million cubic feet after 8 years, and roughly 40 million cubic feet per year after 30 years of production. Note that these numbers are per-year production figures, not cumulative production figures over the life of the well.
These decline curves are not just academic exercises. This sort of analysis is utilized to help determine whether specific shale energy development projects are likely to pay off. The high initial production rate and steep initial decline are characteristic of shale wells (and differ markedly from the slower decline in many conventional gas wells), meaning that most of a project’s revenues – sometimes as high as 80% of total lifetime well revenues – can accrue over the first five to seven years of the well’s producing life. As we will see later in the course when we go through examples of financial analysis for a gas well project, the “front-loading” of production in shale wells can make a big difference in project economics.
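As a rough sanity check on this front-loading claim, we can total up monthly production from the example well over a hypothetical 30-year life (a sketch with my own function names; actual revenue shares would also depend on gas prices and discounting, so this looks at production volumes only):

```python
from math import exp

def rate(t, ip=10.0, b=1.8, d=0.58, t_switch=60):
    """Monthly production rate for the example well (hyperbolic through
    month 60, exponential tail thereafter)."""
    hyp = lambda m: ip / (1 + b * d * m) ** (1 / b)
    if t <= t_switch:
        return hyp(t)
    p0 = hyp(t_switch)
    ds = (p0 - hyp(t_switch + 1)) / p0
    return p0 * exp(-ds * (t - t_switch))

total = sum(rate(t) for t in range(1, 361))     # 30 years of months
first_five = sum(rate(t) for t in range(1, 61)) # first five years
share = first_five / total                      # roughly half of lifetime output
```

With these parameters, about half of lifetime production occurs in the first five years; wells with steeper declines (or higher early gas prices) would push the revenue share higher still.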
North America is not the only location with substantial shale energy resources. Figure 4.2 shows an assessment of global shale gas reserves from the U.S. Department of Energy (the source is a report on global shale energy assessments [24]). While some of these reserves numbers are highly uncertain, due to relatively few test wells having been drilled in many countries, the figure makes the point that the U.S. is not the only player in the global gas game. There is still plenty of natural gas from conventional formations in the Middle East, and trans-oceanic shipments of natural gas are growing. These shipments take place via “liquefied natural gas” (LNG) vessels – the gas is cooled to cryogenic temperatures (about -162 °C), which condenses it to a small fraction of its gaseous volume for transportation. It is then re-gasified in onshore facilities for injection into the pipeline network for consumption.
You might have noticed in the previous lesson that natural gas prices in North America have been plummeting for a few years now. But prices in Europe and Asia have remained high, as shown in Figure 4.3. In some cases, the price differences are substantial – a North American producer could make a lot of money selling to East Asia if only the LNG export terminals were there! Because of the soft market for natural gas in the U.S. and the potential (even the expectation, some might say) for large-scale exports to higher-priced markets, natural gas may be in the midst of a transition from a primarily continental or regional commodity to one that is traded more globally, much like crude oil.
This has naturally raised some geopolitical concerns about a global natural gas market. How the politics of a global natural gas market might impact regional markets has been a topic of substantial debate. A couple of perspectives on this appear in an article from The Economist, citing political scientist David Victor’s research; and a report from the U.S. Energy Information Administration on the potential impact of LNG exports from the United States on natural gas prices in the United States.
Please read the piece from The Economist carefully. The EIA piece is longer, so please just try to get a sense of the conclusions reached by the report and how the analysts came to those conclusions.
Gas and oil extraction from shales has often been called a "game changer," and that may be a vast understatement. The technological ability to profitably extract and market unconventional shale energy resources (under prevailing market conditions) has implications not only for energy consumers, but also for patterns in global energy markets. The U.S. is back to being a leading oil producer and has started to export large quantities of natural gas to Canada and Mexico. Given the energy-importer status of the U.S. just a few years ago, this is a momentous shift. The distribution of shale energy resources also differs from that of conventional energy resources, at least within North America - there are more energy resources closer to major east-coast markets, for example, which facilitates both domestic consumption and exporting. Shale development has also contributed to volatility in energy commodity prices, particularly a precipitous decline in the price of natural gas. Part of the reason for this has to do with differences in extraction patterns between conventional and unconventional wells. Relative to conventional oil or gas wells, hydraulically fractured oil/gas wells in tight geologic formations tend to have very high initial production rates with very rapid declines, followed by (we believe) a prolonged period of moderated production. Because of the high initial production levels, the thousands of unconventional wells that have been drilled are producing at higher levels than a similar number of conventional wells would.
You have reached the end of Lesson 4! Double check the What is Due for Lesson 4? list on the first page of this lesson to make sure you have completed all of the activities listed there before you begin Lesson 5. Note: The Lesson 5 material will open Monday after we finish Lesson 4.
The links below provide an outline of the material for this lesson. Be sure to carefully read through the entire lesson before returning to Canvas to submit your assignments.
The electric utility sector used to be a safe - but boring - place to build a career. It was often said that utility stocks belonged in the portfolios of "widows and orphans" because their returns, while never extravagant, were always very stable. That was the past - the present is very different. The combination of restructuring and deregulation, the emergence of new technologies, and increasing awareness of the environmental impacts of electricity generation, transmission, and distribution has left the industry with substantial challenges, all while trying to keep costs down and service reliable. Electricity is often described as facing a "trilemma" of issues - keeping electricity affordable, reliable, and clean. Any two of those would not be difficult to handle, but all three together have thrust the industry into a period of rapid change that it has not seen in nearly a century.
This is an exciting story, and the electricity industry has certainly become much more dynamic. But before we get into restructuring, deregulation, and the economic challenges associated with greening the electricity grid, we will first have a look at the basic industrial structure of electricity supply and delivery, and the basic economics of electric power generation in particular.
By the end of this lesson, you should be able to:
For those seeking more detail in the process of rate-of-return regulation for energy utilities (we will discuss primarily electric utilities in this lesson but many of the same principles would apply to gas distribution utilities as well), the following readings are recommended:
This lesson will take us one week to complete. Please refer to the Course Calendar for specific due dates. See specific directions for the assignment below.
If you have any questions, please post them to our Questions about EME 801? discussion forum (not email), located in the Lesson 00 module. I will check that discussion forum daily to respond. While you are there, feel free to post your own responses if you, too, are able to help out a classmate.
The United States consumes a bit less than four trillion kilowatt-hours of electricity each year, with the electric sector as a whole representing more than $350 billion in retail sales (that's a few percentage points of total U.S. gross domestic product). The biggest drivers of electricity demand are population, economic activity, the weather, and daily patterns of human activity. The first two factors affect electricity demand over periods of years or longer (see Figure 5.1, which shows that electricity demand in the U.S. has basically grown each year except during major recessions). The weather and human activity patterns (e.g., it is warmer in the summer than in the winter, and people tend to move around during the day and sleep at night) influence electricity demand from hour to hour and from day to day.
Fossil fuels dominate electricity supply in the United States, although there are some regional variations (for example, most electricity in the Pacific Northwest is provided through hydropower; this is true of a number of other countries as well, such as Norway). Until very recently, coal provided more than 50% of all electricity generated in the U.S. But increasing environmental constraints on coal utilization and a decline in natural gas prices have induced a rapid shift since 2010, with coal and natural gas on nearly equal footing in terms of overall electricity supply.
Regional variations in fuel utilization are part of the reason that electricity prices vary widely across the U.S., as shown in Figure 5.2.
Generating stations vary widely in size, corresponding to their intended use. Most nuclear generators are well over 1,000 MW, and there are many coal and natural-gas power plants of this size. These larger facilities are designed to be operated continuously for long periods of time and are used to meet "base load" demand - demand that needs to be satisfied nearly every hour of the day. Demand spikes or "peaks" are normally supplied using smaller generators, ranging in size from less than 1 MW to around 200 MW, that can be turned on and off with relative ease and speed. Many of these are simple-cycle combustion turbines that run on natural gas or oil, but hydroelectric facilities are often used as "peakers." Levels of demand intermediate between baseload and peak (called "swing demand" or "shoulder demand") are met with generators that are several hundred megawatts in size; these units are almost always fossil fuel turbines.
The following two videos explain in more detail the patterns of electricity demand and supply in the United States:
So, this is basically seasonal electricity demand by hour for the PJM system. So, it's the mid-Atlantic and sort of Western grid that we're a part of. And so, basically, what you can see is that in the fall and the spring are kind of boring for the electricity industry. Everybody can just kind of go to sleep. And demand tends to be higher in the summer and the winter.
For the PJM system as a whole, it sees its highest level of electricity demand in the summer. There are some areas of the US that are what we call winter-peaking. They see their highest level of electricity demand in the winter. And so, like Pennsylvania was a winter-peaker for decades, up until sort of about the last 10 or 15 years. Actually the Northwestern corner of Pennsylvania is still winter-peaking. But you know, places like Minnesota are winter-peaking states.
The demand for electricity and the cost of providing electricity also varies by the time of day. And so, the PJM system in the summer tends to see a peak in the sort of mid to late afternoon. In the winter, the daily pattern of electricity usage changes a little bit. Where you have kind of a double peak, where there is a peak in the morning and then there's another peak in the afternoon.
OK. So, we have a mix of electricity generation capacity that is primarily fossil fuels. So, the pie chart on the left is our capacity to generate electricity. And so, it's about 70% coal and natural gas. So, these are our power plants. But if you look at the actual electricity that's generated, our coal plants and our nuclear plants are used a lot more intensively than, say, are our natural gas plants or especially our petroleum-fired plants. We have some oil-fired capacity in the US, but it almost never gets used. And there's a good chance that a lot of it will be retired within the next 15 years or so because of environmental reasons primarily. But still, about 70% of the electricity that's generated comes from fossil fuels, primarily coal and natural gas. Another 20% is nuclear. And then there are things like petroleum, hydro, biomass, wind, solar, all of these things. Those count for a much smaller percentage of our electricity generation.
[Student] Do you have a handle on-- so, obviously, the electric generation capacity is not entirely used and has this flexibility to change out the fraction of which each particular generation source makes up of what's generated. Do you have an idea of what the capacity factor is of the overall grid, and how much of the capacity we're actually using overall?
So, the annual capacity factor is - I'd say the annual capacity factor is probably somewhere close to two thirds.
[Student] Oh, that low? Really?
Well, that's average. Right. Yeah, but what's really driving that is that there's a big chunk of capacity that we only use when electricity demand is really high. So, we have a lot of what we call peaking plants. So, in PJM, for example, 15 percent of all of the generation capacity is used in less than 100 hours a year. And so, the amount of capacity that we have to have on hand to accommodate peak electricity demand during the summertime is fairly substantial.
So, within fuel types, coal plants and nuclear plants tend to have higher capacity factors. Natural gas plants tend to have pretty low capacity factors, because a lot of those are used for peaking.
[Student] Why does the hydroelectric drop? Because I know coal and nuclear, they're pretty much static and so, they can't vary. But wouldn't hydroelectric be the same?
Not necessarily, for two reasons. One is that a good amount of the hydro generation in the Pacific Northwest is what we call run of river, which means that there's no storage. So, there is very limited ability to control how much water passes through dams, and therefore, how much electricity is generated.
The second reason is that a lot of hydro capacity where there is storage is used for something called load following. That's kind of like peak of demand. So, it's not producing electricity at a constant rate all the time. But rather, the electricity that is produced goes up and down in response to fluctuations in demand. So, that's why.
[Student] Are a lot of hydro plants pumped hydro, or do they just not run and let it fill up slowly when they're off?
So, there are a lot of pumped hydro plants. The biggest hydro dams are generally not pumped storage, because you'd need a really big pump for something like that.
So, the biggest hydro dams, most of which are located in the Pacific Northwest, either have a marginal amount of storage behind them or are run of river.
If you go up to Canada, on the other hand, their big hydro dams have years' worth of water stored behind them. So, there's a big difference in the way that Canada and the US operate their big hydro dams.
[Student] And just having a dam doesn't imply storage.
Right.
[Student] We have to pump it the other way, too.
You would either have to pump it the other way or you'd have to have a big reservoir behind the dam.
The U.S. power transmission grid is highly integrated with grids in Canada and Mexico, so the "U.S. power grid" is perhaps more appropriately called the "North American Power Grid." The North American grid itself is split into three sub-grids that are referred to as "interconnects." The Eastern Interconnect includes states and provinces east of the Rocky Mountains (more or less); the Western Interconnect includes states and provinces west of the Rocky Mountains; and the Texas Interconnect is limited to most of the state of Texas. While there are some weak connections between the three interconnects, they essentially function separately. National Public Radio has a very nice website called "Visualizing the U.S. Electric Grid" [30] on which you can look at the existing grid relative to fossil and renewable generation resources and also understand how the grid will have to change to support the development of large-scale renewable energy (primarily wind and solar).
In most industrialized countries, electric power is provided by generating facilities that serve a large number of customers. These generating facilities, known as central station generators, are often located in remote areas, far from the point of consumption. The economics of central station generation is largely a matter of costing. As with any other production technology, central station generation entails fixed and variable costs. The fixed costs are relatively straightforward, but the variable cost of power generation is remarkably complex. We will examine each of these in turn.
The fixed costs of power generation are essentially capital costs and land. The capital cost of building central station generators varies from region to region, largely as a function of labor costs and "regulatory costs," which include things like obtaining siting permits, environmental approvals, and so on. It is important to realize that building central station generation takes an enormous amount of time. In a state such as Texas (where building power plants is relatively easy), the time-to-build can be as short as two years. In California, where bringing new energy infrastructure to fruition is much more difficult (due to higher regulatory costs), the time-to-build can exceed ten years. Table 5.1 shows capital cost ranges for several central-station technologies. Although the ranges in Table 5.1 are quite wide, they still mask quite a bit of uncertainty in the final cost of erecting power plants.
Operating costs for power plants include fuel, labor and maintenance costs. Unlike capital costs which are "fixed" (don't vary with the level of output), a plant's total operating cost depends on how much electricity the plant produces. The operating cost required to produce each MWh of electric energy is referred to as the "marginal cost." Fuel costs dominate the total cost of operation for fossil-fired power plants. For renewables, fuel is generally free (perhaps with the exception of biomass power plants in some scenarios); and the fuel costs for nuclear power plants are actually very low. For these types of power plants, labor and maintenance costs dominate total operating costs.
In general, central station generators face a tradeoff between capital and operating costs. Those types of plants that have higher capital costs tend to have lower operating costs. Further, generators which run on fossil fuels tend to have operating costs that are extremely sensitive to changes in the underlying fuel price. The right-most column of Table 5.1 shows typical ranges for operating costs for various types of power plants.
Technology | Capital Cost ($/kW) | Operating Cost ($/kWh) |
---|---|---|
Coal-fired combustion turbine | $500 — $1,000 | 0.02 — 0.04 |
Natural gas combustion turbine | $400 — $800 | 0.04 — 0.10 |
Coal gasification combined-cycle (IGCC) | $1,000 — $1,500 | 0.04 — 0.08 |
Natural gas combined-cycle | $600 — $1,200 | 0.04 — 0.10 |
Wind turbine (includes offshore wind) | $1,200 — $5,000 | Less than 0.01 |
Nuclear | $1,200 — $5,000 | 0.02 — 0.05 |
Photovoltaic Solar | $4,500 and up | Less than 0.01 |
Hydroelectric | $1,200 — $5,000 | Less than 0.01 |
Because of the apparent tradeoff between capital and operating cost, comparing the overall costs of different power plant technologies is not always straightforward. Oftentimes, you will see power plants compared using a measure called the "Levelized Cost of Energy" (LCOE), which is the average price per unit of output needed for the plant to break even over its operating lifetime. We will discuss LCOE in more detail in a future lesson - it is an extremely important (and often-used) cost metric for power plants, but it has its own problems that you will need to keep in the back of your head.
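As a preview, a minimal LCOE-style calculation can make the capital/operating tradeoff concrete. All plant numbers below are hypothetical (loosely inspired by the ranges in Table 5.1), and this simplified formula ignores taxes, degradation, and fuel-price escalation - it is a sketch, not the formal LCOE definition we will use later:

```python
def crf(rate, years):
    """Capital recovery factor: spreads an upfront cost into an
    equivalent annual payment at the given discount rate."""
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

def lcoe(capital_per_kw, op_cost_per_kwh, capacity_factor,
         discount_rate=0.08, life_years=30):
    """Break-even average price ($/kWh) for one kW of capacity."""
    annual_kwh = capacity_factor * 8760                     # hours per year
    annual_capital = capital_per_kw * crf(discount_rate, life_years)
    return annual_capital / annual_kwh + op_cost_per_kwh

# A capital-heavy, cheap-to-run plant (nuclear-like) vs. a cheap-to-build,
# fuel-heavy one (gas-turbine-like). The former needs a high capacity factor
# to spread its capital cost; the latter is the cheaper way to cover a few
# hundred peak hours a year.
baseload = lcoe(4000, 0.03, capacity_factor=0.90)
peaker = lcoe(600, 0.07, capacity_factor=0.10)
```

Notice that the same gas-turbine-like plant becomes much cheaper per kWh if it runs more hours - which is exactly why the capacity factor assumption is one of the "problems" with LCOE comparisons mentioned above.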
Irrespective of technology, all generators share the following characteristics which influence the plant's operations:
Technology | Ramp Time | Min. Run Time |
---|---|---|
Simple-cycle combustion turbine | minutes to hours | minutes |
Combined-cycle combustion turbine | hours | hours to days |
Nuclear | days | weeks to months |
Wind Turbine (includes offshore wind) | minutes | none |
Hydroelectric (includes pumped storage) | minutes | none |
The minimum run time and ramp times determine how flexible the generation source is; these vary greatly among types of plants and are a function of regulations, type of fuel, and technology. Generally speaking, plants that are less flexible (longer minimum run times and slower ramp times) serve base load energy, while plants that are more flexible (shorter minimum run times and quicker ramp times) are better-suited to filling peak demand. Table 5.2 and Figure 5.3 show approximate (order-of-magnitude) minimum run times and ramp times for several generation technologies. It is important to realize that, in some sense, these are "soft" constraints. It is possible, for example, to run a nuclear plant for five hours and then shut it down. Doing this, however, imposes a large cost in the form of wear and tear on the plant's components.
The cost structure for transmission and distribution is different than for power generation, since there is basically no fuel cost involved with operating transmission and distribution wires (and their associated balance-of-systems, like substations). At the margin, the cost of loading a given transmission line with additional electricity is basically zero (unless the line is operating at its rated capacity limit). Capital cost thus dominates the economics of transmission and distribution.
For nearly one hundred years, the fundamental building block of the electric power sector was the vertically-integrated utility, regulated by the public utility commission in the state(s) in which the utility operated. Roughly speaking, the electric power supply chain has three links (shown in Figure 5.4): generation, transmission, and distribution. Generators are electric power stations that produce electricity by various means, including the burning of fossil fuels or waste products, harnessing kinetic energy of water and wind, and nuclear fission. The various generators, which are often located large distances from consumption centers, connect to a high-voltage transmission network. Closer to the point of consumption, the transmission network is connected (through a series of step-down transformers) to a lower-voltage distribution network. A second series of transformers connects individual customers to the distribution network.
The electric power sector has long been viewed as having economies of scale and of scope. The term "economies of scale" means that average and marginal costs of production decline as the output of firms increases - in other words, situations where larger firms are more efficient than smaller firms. Firms that exhibit economies of scale, no matter how much they produce, are often termed "natural monopolies." The term "economies of scope" in this case means that one firm can provide generation, transmission, and distribution service more efficiently than separate firms providing each type of service. Such a type of firm is referred to as a "vertically integrated" firm. Economies of scale were the justification for granting electric utilities franchise monopolies, while economies of scope were the justification for the continued vertical integration of firms in the industry. With electric-sector reform in the U.S., the assumption of economies of scale has been questioned in the generation business, but the "wires" segments of the supply chain (transmission and distribution) are still considered to exhibit economies of scale and are thus still tightly regulated.
The emergence of economically viable small-scale or "distributed" generation has, in some places, begun to upend traditional assumptions regarding economies of scale in generation and also the extent to which distribution of electricity could be a competitive business. We won't discuss those issues as much in this course, but if you are interested in learning about these types of disruptive technologies, AE 862 devotes an entire semester to this topic.
Electric energy is currently generated by two types of firms. The first type is the traditional vertically-integrated utility. These firms generate power to sell to their customers or to sell on the open market. The second type is the non-utility generator, also called an independent power producer (IPP) or merchant generator. These firms typically do not have any customers who consume electricity; they simply generate power and sell it to utilities that do have customers. In a competitive market for electricity, IPPs are likely to be financially successful only if they can produce power at costs lower than prevailing market prices, or below the cost that the utility charges.
Electricity restructuring has changed the utility business model substantially in areas of the U.S. where it has been enacted. The details of restructuring are left to the next lesson, but the map in Figure 5.5 will give you some idea of areas of North America that have actively engaged in electricity restructuring versus those that have resisted restructuring and competition in favor of the traditional model of the regulated and vertically-integrated electric utility. Areas that have established "Regional Transmission Organizations" as shown in the map are considered to have engaged in some degree of electric industry restructuring.
Although the electricity industry was dominated by the vertically-integrated utility for nearly a century, the beginnings of the industry were very different. After the opening of Edison's 1882 Pearl Street generation station in New York City's financial district, the industry emerged in an era characterized by intense competition between Edison, his rivals, and municipal cooperatives. Edison's direct current (DC) power required generation stations to be located within a mile of the electric lights. A decade later, Edison merged his company with a firm expert in alternating current (AC) technology to form General Electric. AC power was both more efficient for powering motors than DC and could be shipped long distances, allowing large central generation stations to supply many customers.
By 1910, a consensus emerged that vertically-integrated companies should be granted monopoly status within a geographical area in exchange for regulation that obliged them to serve consumers at prices and terms that were regulated by the respective states in which these companies operated, but gave them essentially guaranteed rates of return that could attract capital. Power companies supported state regulation as a barrier to entry of potential competitors and as a bulwark against the high costs of managing a patchwork of local regulation. This ushered in a decades-long era that has come to be known as the "utility consensus."
Most utility regulation occurs through a process known as "cost-based ratemaking" or "rate of return regulation." Under rate of return regulation, the utility sets prices (rates that are paid by retail customers) to recover the costs associated with providing service, plus a level of profit determined by the state public utility commission. It is important to remember that this regulation in the United States occurred at the state level, not the federal level. For many decades, the federal government played a relatively minor role in the regulation of specific utility companies. The federal government did play a major role in the widespread electrification of the American countryside, in part through the establishment of federal policies such as the Rural Electrification Act and federal power projects such as those managed by the Tennessee Valley Authority and the Bonneville Power Administration. As we will find out in Lesson 6, the process of "deregulation" has actually involved a substantial shift in electricity regulatory authority from the states to the federal government.
The following short videos provide some more explanation on rate of return regulation.
So, this public utility operated under what we call the regulated monopoly or regulated franchise model. The utility was granted by the state a geographic territory over which it had a monopoly to produce electricity, to transmit electricity, and to sell electricity. So, not only was this utility a vertically integrated firm, it was a vertically integrated monopoly. So, for seven decades, the utility could not have any competition for any part of its business. And, in exchange for that, in exchange for being given this state-sanctioned monopoly, the utilities agreed to have their prices and their profits regulated by a state-level entity called a public utility commission. So, the public utility commission effectively set the price that the utility could charge for electricity. It also set the price that the utility could charge to different types of customers. So, Penn State had a different price than Seth Blumsack. And the public utility commission was ultimately the entity that set those prices. The public utility commission also decided what investments the utility would be allowed to make. Or not really make, but what investments the utilities could force their customers to pay for. And so, the utilities basically had this monopoly. And they were very highly regulated, and their operations were very highly regulated. And basically, they had one job, sort of two jobs. The first job was that they had to supply whatever electricity was demanded. So, they couldn't tell people, I will not supply you with electricity. And the other thing was that they basically had to operate the system reliably. So, they had to operate the system in a way that you wouldn't have a lot of blackouts. So, their responsibility was, basically, serve all the customers and don't break the system.
[Question] That's all utilities?
So you can see how the system had its ups and downs. It created a very stable economic climate for the utility business. And it was able to borrow money and attract investment at very attractive rates of return. Because if the utility has a guaranteed profit margin of 10% and there's basically no risk to that, if you're a potential investor who's going to lend the utility money, all of a sudden, this looks like manna from heaven. And so it created a very stable economic climate for the utilities. And it allowed, basically, a fairly rapid electrification of the US.
So, variation in fuel costs, those variations were generally allowed to be passed on directly to the consumers. So, if the price of all the fuel the utility had to buy doubled, then the generation portion of your electricity bill would double. So, utilities were generally allowed to do that. Most state regulators broke up the utility's costs into capital investments--in wires, and substations, and power plants, and things like that--and operational costs, which were things like labor costs and fuel. And basically the deal that the utilities had with the public utility commission was that they could pass through all of their operational costs to consumers. So if fuel prices increased, electricity prices would go up. But they weren't allowed to earn any profit on fuel. But they were allowed to earn profit on stuff that they had built. This was called the rate base. So, every time the utility built a new power plant, or it built a new transmission line, then it would get to earn a profit off of that investment.
So, this regulation created a very stable climate for a long time. You could see where we might start to have some problems. One, if the utility's profits went up every time it built something, then the utilities got in this mindset where their business model was building stuff. Second, when you're spending other people's money, then you're maybe not necessarily as careful with it as you might be otherwise. So these incentive problems, as economists would call them, existed for a long time. But they really didn't manifest themselves until the 1970s, when we basically had two things happen at the same time. One, we had the energy crisis of the 1970s. And fuel prices increased dramatically. At the time, 20% of US electricity was generated with oil. So, when oil prices went up dramatically, that impacted the cost of generating electricity. The second thing that happened was for environmental and other reasons, the utilities in many states were halfway forced and halfway decided to make large investments in nuclear power plants. And as it turned out, the utilities did not know how to build or operate nuclear power plants terribly well. And so, there were all sorts of delays, and cost overruns, and cost escalations with nuclear power plants. And because of this deal that the utilities had with their regulators, all or most of those costs ultimately had to be paid by utility customers.
The determination of utility rates (the prices that are paid for electricity by you and me, or by other types of electricity customers such as businesses) occurs through an administrative process known as a "rate case." Rate cases are typically held on a periodic basis (say, every few years) or when special circumstances dictate. Examples of such special circumstances might include the utility wanting to introduce a new type of electric rate, or the utility wanting to raise rates due to an increase in fuel prices.
Please have a look at Rate of Return Regulation [31] by Mark Jamison (A PDF file is available to registered students in the module in Canvas.) that describes the process of utility ratemaking in some detail.
I won't repeat all of the details from the Jamison reading here, but there are a few important points and implications of utility ratemaking to hit on.
First, the ratemaking process involves determining which utility costs can be passed through to customers directly and which activities would qualify for inclusion in the "rate base," which determines how profitable the utility is. The sum of these two types of costs is called the "revenue requirement," which is how much money the utility would need to bring in during a given time period in order to cover its costs and retain its profit. Jamison's way of writing the revenue requirement is common; here is another version of that same equation:

RR = O + r × (V − D)
Where RR is the revenue requirement, O indicates total operating costs (fuel, labor, maintenance, taxes - these are pass-through costs on which the utility is not allowed to earn any profit); V is the accounting value of the utility's assets; D is the accounting depreciation on those assets; and r is the rate of return that the utility is allowed to earn. The term V-D is often referred to as the "rate base." We will discuss some of these accounting issues later in the course but there are two important aspects to the ratemaking equation. Operating expenses are generally passed through to customers with few challenges, and utilities are not allowed to profit on operational decision-making. On the other hand, the larger the rate base, the more profit that the utility earns. So the structure of utility ratemaking encourages capital investment.
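To make the ratemaking arithmetic concrete, here is a minimal sketch of the revenue requirement calculation. All of the dollar figures and the 9% allowed return are invented for illustration, not drawn from any actual rate case:

```python
# Revenue requirement under rate-of-return regulation: RR = O + r * (V - D).
# All numbers below are illustrative assumptions, not real utility data.

def revenue_requirement(operating_costs, asset_value, depreciation, rate_of_return):
    """Return the annual revenue requirement for a regulated utility."""
    rate_base = asset_value - depreciation  # V - D: what the utility earns profit on
    return operating_costs + rate_of_return * rate_base

O = 400e6   # operating costs: fuel, labor, maintenance, taxes ($/year); pass-through
V = 2.0e9   # accounting value of the utility's assets ($)
D = 0.5e9   # accumulated depreciation on those assets ($)
r = 0.09    # allowed rate of return (9% per year, a representative figure)

rr = revenue_requirement(O, V, D, r)
print(f"Rate base: ${V - D:,.0f}")          # $1,500,000,000
print(f"Revenue requirement: ${rr:,.0f}")   # $535,000,000
```

Note how the incentive structure shows up in the arithmetic: adding $100 million to the rate base raises the utility's allowed profit by $9 million per year, while adding $100 million to operating costs raises profit by nothing.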
Second, the ratemaking process involves decisions regarding how to allocate costs to different types of customers (e.g., residential, commercial, industrial). It can be hard to identify exactly how much of a utility's costs are attributable to serving specific customers or even types of customers. This is because the power grid is effectively a common resource used by all customers to facilitate reliable delivery of electric power. The process of allocating utility costs among the various customer types is thus done by negotiation (as part of the rate case) rather than by some magical calculation. Historically, large commercial and industrial customers have had rates that are lower than average while small commercial and residential customers have had higher rates. This has led to some accusations of "cross-subsidization" and the politicization of electric rate setting.
Third, the ratemaking process must also determine the rate of return for the utility. Generally, utilities are entitled to earn a "fair" rate of return. While the term "fair" is not all that meaningful in most economics discussions, it has a very specific definition in the public utility world. A "fair" rate of return is one that allows the utility to raise whatever capital it needs to make needed investments in infrastructure. Basically, the utility has to be profitable enough (and sufficiently low risk) that investors will be willing to lend it money. We will explore this concept in a future lesson, but the requirement for a "fair" rate of return implies that there is some connection between the cost of capital for the utility (here, "cost of capital" refers to the interest rate rather than the actual cost of any capital investment) and the rate of return that its regulator needs to allow. The rates of return allowed by public utility commissions vary, but a return on the rate base of 8% to 10% per year is a good representative figure.
Electricity is a unique commodity in that it cannot generally be stored at a large scale at reasonable cost, so the entities that operate the transmission grid need to make plans and take actions to keep supply and demand matched in "real-time" - from minute to minute and second to second. Except in circumstances where regulations have prohibited such actions, the electric utility industry has historically broken the process of meeting customer demands into three stages. The sections below discuss each of the three stages under the assumption that all relevant plans and actions are undertaken by a regulated vertically-integrated electric utility.
Days to years in advance, utilities have acquired stores of electric generation capacity through several mechanisms including the construction of new generation plants (and supporting transmission lines), long-term forward contracting, and smaller quantities of spot-market purchases. Construction of new generation plants typically needs to happen years in advance, while forward and spot market contracts may be signed for periods varying from years in advance to a day or hour in advance. Utilities operating in states that have by and large retained the regulated and vertically-integrated structure submit resource plans to their respective state regulatory commissions through a process known as "integrated resource planning."
Days to hours in advance, utilities make "commitment" decisions to have certain quantities of existing generation plant available to produce electricity when the need arises. In some cases (such as nuclear plants or hydroelectric facilities), the commitment decisions are made many weeks or months in advance. The decision to commit a generating unit to be able to produce electricity means that the utility is willing to incur fixed costs related to unit startup in order to have that generating plant ready and available to produce electricity in real time.
Generators with large start-up costs or long minimum-run times (such as large combustion turbines and nuclear power plants) cannot run optimally if their output is determined using a single-period analysis (a "period" in the electric power industry usually refers to a length of time of about an hour). Instead, their operation must be scheduled over a longer time horizon of days or even weeks. A utility would need a forecast of demand weeks in advance before turning on a generator with a long minimum run time. They would need to look at the demand forecast over that period of weeks and decide the lowest-cost mix of generation plants that would meet the demand. This decision is referred to as the unit-commitment decision, and it is aptly named. The utility would, weeks in advance of consumption, have to commit to using a given mix of generation resources. If demand were to deviate wildly (and unexpectedly) from the utility's forecast, it would be faced with the decision of curtailing output from some of its generators (if it over-estimated demand) or utilizing generators with fast start times but high costs (if it under-estimated demand).
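The unit-commitment tradeoff described above can be sketched with a toy brute-force search: for each combination of units to commit, pay the startup costs once, dispatch the committed units in merit order across the horizon, and keep the cheapest feasible plan. The two plants, their costs, and the demand forecast below are all invented for illustration:

```python
from itertools import product

# Toy unit-commitment sketch: brute-force over commit/don't-commit choices.
# All plant parameters and demands are invented for illustration.

units = {                      # name: (capacity MW, marginal $/MWh, startup $)
    "baseload": (100, 15, 5000),
    "peaker":   (80, 60, 200),
}
demand = [90, 120, 150, 110]   # forecast demand (MWh) in each period

best_cost, best_plan = float("inf"), None
for plan in product([0, 1], repeat=len(units)):    # each unit on or off for the horizon
    committed = [name for name, on in zip(units, plan) if on]
    capacity = sum(units[n][0] for n in committed)
    if any(d > capacity for d in demand):
        continue                                   # plan cannot serve the forecast load
    cost = sum(units[n][2] for n in committed)     # startup costs, incurred once
    for d in demand:                               # dispatch committed units cheapest-first
        remaining = d
        for n in sorted(committed, key=lambda n: units[n][1]):
            out = min(remaining, units[n][0])
            cost += out * units[n][1]
            remaining -= out
    if cost < best_cost:
        best_cost, best_plan = cost, committed

print(best_plan, best_cost)    # ['baseload', 'peaker'] 15850
```

A real unit-commitment problem adds minimum run times, ramp limits, and thousands of units, which is why utilities and grid operators solve it with mixed-integer optimization rather than enumeration; but the core decision is the same one shown here.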
Close to the real-time point of consumption (i.e., minutes ahead of real time), LSEs issue dispatch orders to existing generation facilities, setting specific output levels for each. LSEs also utilize some generation facilities to provide "ancillary services," which represent very fast (sub-second to minutes) automatic adjustments to keep supply and demand in balance. The process of dispatching generation plants to meet customer demands within a specified control footprint is variously known as "economic dispatch" or "optimal power flow." The terminology here is suggestive that generation plants are dispatched in such a way as to minimize the total cost of delivered electricity.
The economic dispatch algorithm actually implemented is conceptually pretty simple. Begin by turning on (or "dispatching") your generation source with the lowest "marginal" or operational costs. Have that generation source increase output until all load is met or the generation source hits its capacity constraint, whichever comes first. If the cheapest generation source hits its capacity constraint before all the demand is met, the second-cheapest generation source is turned on, and increases power until either it reaches its capacity constraint or all the demand is met. The process continues by successively turning on more expensive generation sources until the entire load is served. It is important to realize that the economic dispatch algorithm does not consider any fixed costs of power plants - only those costs directly associated with plant operation (for fossil-fired power plants, this would be primarily fuel).
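The stacking procedure just described can be written as a short function. The plant names and numbers here are placeholders, not data for any real system:

```python
# Merit-order economic dispatch sketch: serve demand from the cheapest
# marginal-cost plant upward. Plant data are illustrative placeholders.

def economic_dispatch(plants, demand):
    """plants: list of (name, capacity_mw, marginal_cost_per_mwh) tuples.
    Returns {name: output_mwh} meeting `demand`, cheapest plants first."""
    dispatch = {}
    remaining = demand
    for name, capacity, mc in sorted(plants, key=lambda p: p[2]):
        output = min(remaining, capacity)  # run until demand met or capacity hit
        dispatch[name] = output
        remaining -= output
    if remaining > 0:
        raise ValueError("demand exceeds total system capacity")
    return dispatch

plants = [("A", 50, 20), ("B", 30, 45), ("C", 10, 90)]
print(economic_dispatch(plants, 70))   # {'A': 50, 'B': 20, 'C': 0}
```

Note that fixed costs appear nowhere in the function, matching the point above: economic dispatch ranks plants only by their operating (marginal) costs.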
Economic dispatch is best illustrated using an example. Suppose that you were an electric utility that had three generators that could be used to meet electricity demand, as shown in Table 5.1. The fixed costs would represent land leases, fuel/transmission interconnections and any other costs that do not depend on the level of output. The marginal cost measures the amount of money that it costs each plant to produce one megawatt-hour of electric energy.
| Plant Name | Capacity (MW) | Fixed Costs ($) | Marginal Cost ($/MWh) |
|---|---|---|---|
| Colchester | 100 | 50 | 10 |
| Warren | 75 | 25 | 30 |
| Burke | 20 | 10 | 60 |
Suppose that demand during some time period was 150 MWh. The process of determining economic dispatch would follow three steps.
First, order the plants from lowest to highest marginal cost, which will tell you which plants would be utilized to produce electricity given some level of demand. This picture is called the "dispatch curve," shown in Figure 5.6. Each step in the dispatch curve represents one power plant. The height of the step represents the marginal cost of electricity production for each plant. The width of each step represents the capacity of each plant. Note that the x-axis represents total capacity for the entire utility, not the capacity for any individual power plant.
| Name | Capacity Range (MW) | Net Capacity (MW) | Marginal Cost ($/MWh) |
|---|---|---|---|
| Colchester | 0-100 | 100 | 10 |
| Warren | 100-175 | 75 | 30 |
| Burke | 175-195 | 20 | 60 |
In this case, to meet the demand of 150 MWh, the utility would dispatch Colchester at its maximum output (100 MWh) and would meet the remaining 50 MWh of demand using the Warren plant. The Burke plant (the most expensive of the lot) is not dispatched.
We can use the dispatch information to calculate several cost measures for our hypothetical electric utility.
Total Cost is the sum of all costs incurred to meet electricity demand. To obtain total cost, we would multiply quantity produced times marginal cost for each plant, and then add in the fixed costs for all plants. Note that we need to add fixed costs even for plants that do not produce anything.
Average Cost is Total Cost divided by the total amount of electricity produced.
System Marginal Cost is the marginal cost of the generating plant that meets the last MWh of electricity demanded. The system marginal cost is also referred to as the "system lambda."
We can now calculate these cost measures for our example. The economic dispatch of all power plants in the system is: Colchester at 100 MWh, Warren at 50 MWh, and Burke at 0 MWh.
The "marginal generator" (the plant used to meet the last MWh of demand) is the Warren plant. So, the system lambda would just be the marginal cost of the Warren plant, which is $30/MWh.
Note the different units for total cost ($) versus average cost and marginal cost ($/MWh).
Here is an exercise that you can try on your own. Take the same three-generator example and suppose that demand was 190 MWh. Verify that:
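If you want to check your answers to the exercise, here is a small self-contained sketch using the Table 5.1 plant data to compute the total cost, average cost, and system lambda at 190 MWh of demand:

```python
# Check the 190 MWh exercise using the Table 5.1 plants.
# Merit-order dispatch: Colchester (100 MWh), Warren (75 MWh), Burke (15 MWh).

plants = [  # (name, capacity MW, fixed cost $, marginal cost $/MWh)
    ("Colchester", 100, 50, 10),
    ("Warren",      75, 25, 30),
    ("Burke",       20, 10, 60),
]
demand = 190

remaining, total_cost, lam = demand, 0.0, None
for name, cap, fixed, mc in sorted(plants, key=lambda p: p[3]):
    output = min(remaining, cap)
    total_cost += fixed + output * mc   # fixed costs are paid even at zero output
    if output > 0:
        lam = mc                        # marginal cost of the last dispatched plant
    remaining -= output

print(f"Total cost: ${total_cost:.2f}")                 # $4235.00
print(f"Average cost: ${total_cost / demand:.2f}/MWh")  # $22.29/MWh
print(f"System lambda: ${lam}/MWh")                     # $60/MWh
```

At 190 MWh, the Burke plant becomes the marginal generator, so the system lambda jumps from $30/MWh (in the 150 MWh case) to $60/MWh.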
The electric power grid is one of the most complex engineering feats ever designed by humans. While the supply chain for electricity - generation, transmission, and distribution - sounds simple enough, the reality is that the grid is effectively operated and overseen by a large patchwork of overlapping entities, market structures, and regulation. The material in this lesson is really laying the groundwork for the next few lessons to come, in which we will look more in-depth at the ways in which the electricity industry is changing due to pressures from industry restructuring and regulatory initiatives designed to increase the amount of renewably-generated electricity on the grid. The economics of generation are driven primarily by capital and fuel costs; and in general there is a tradeoff between the cost to build a power plant and the cost to operate it. Plants that are expensive to build tend to be cheaper to operate, and vice versa. Moreover, there is not a single power plant technology that can meet all the needs of the system at all times. To prevent blackouts, electric grid operators need a combination of low-cost "base load" power plants that can be expected to run reliably for extended periods of time; and "shoulder" and "peaking" plants that are more expensive to operate but operationally flexible enough to allow system operators to adapt to constantly changing electricity demand. The economics of transmission and distribution are not as nuanced, since these assets are almost entirely capital-intensive, with little or no operational costs. For nearly one hundred years, the companies that provided generation, transmission, and distribution service were vertically-integrated public utilities that operated under state regulation. About half of U.S. states have, in the past couple of decades, shifted away from this system in favor of a more competitive and regionally-based industry model.
You have reached the end of Lesson 5! Double check the What is Due for Lesson 5? list on the first page of this lesson to make sure you have completed all of the activities listed there before you begin Lesson 6. Note: The Lesson 6 material will open Monday after we finish Lesson 5.
The regulated electric utility model that we studied in Lesson 5 served the industry and consumers well for nearly a century. Costs and prices fell nearly continuously, and service expanded to nearly every corner of the U.S. While other countries experimented with their own regulatory systems, for the most part, these involved national electric utilities, not the regulated private enterprises found in the United States.
Beginning in the 1970s, the trend of falling prices suddenly reversed, and the decades of technological progress in power generation slowed. The rapid pace of technological advance had masked some fundamental problems with the way in which electric utilities were regulated. Dissatisfaction among electricity consumers grew, paving the way for the grand experiments in electricity deregulation and restructuring that continue to this day. This lesson and the following lesson will provide an in-depth discussion of how these new markets are structured and will describe some fundamental changes in how the price of electricity is determined. The basic market concepts will be discussed in this lesson, while the next lesson will examine how these markets are changing in response to the emergence of large-scale renewable power generation.
By the end of this lesson, you should be able to:
This lesson will take us one week to complete. Please refer to the Course Calendar for specific due dates. See specific directions for the assignment below.
If you have any questions, please post them to our Questions about EME 801? discussion forum (not email), located in the Start Here! module. I will check that discussion forum daily to respond. While you are there, feel free to post your own responses if you, too, are able to help out a classmate.
Technology improvements and economies of scale caused electric rates to fall until 1970, which made industry and residential customers happy. In 2002 dollars, price fell from about $5.15 per kWh in 1892 to about 9.7 cents per kWh in 1970. The highly regulated structure of the electric utility business created a stable environment for expansion of access to electric power. Beginning in 1970, however, prices for electric power began to rise sharply: 320% in current dollars from 1970 to 1985 (28% in inflation adjusted prices), as shown in Figure 6.1.
The oil embargo of 1973-74 affected both the production and consumption of electricity. Petroleum fueled 17% of US electricity generation at that time, and so, the curtailment in supply reduced the ability to generate needed electricity. (Today, petroleum supplies only 2%.) The uncertain supply and much higher prices (which jumped from $3 a barrel in September 1973 to $12 a barrel in January 1974) had a devastating effect on the economy directly and on the demand for electricity indirectly. Electric power generation in the United States increased exponentially in the period 1949 through 1973 (at the compounded rate of 7 ¾% per year), and linearly since 1973 (with annual increases of 70 billion kilowatt-hours per year, the equivalent of roughly 15 new large generation plants per year), as shown in Figure 6.2. The transition from exponential to linear growth was unanticipated, and led to the decade-long over-capacity of generation discussed above. The industry went through a difficult time after 1973 adjusting to the slower rate of growth; utilities had in startup, construction, or planning a doubling of generation capacity. No one knew how long the decreased demand would last and, given the penalties of delaying or canceling construction, much of this capacity was built. If the industry had continued to grow exponentially, it would be almost twice as large today as it actually is.
Price increases were driven by higher petroleum, coal, and natural gas prices; rate increases to cover the cost of over-capacity, particularly in generation; reduced rates of technology improvement; and investments in coal and nuclear generation plants of a size that stretched available technology beyond cost effectiveness. In the 1970s, nuclear power plants promised low-cost, environmentally benign power. Many utilities started construction of new nuclear plants. Unfortunately, the utilities learned that building and operating nuclear plants was much more difficult than operating dams and fossil fuel plants. As a result, there were vast cost overruns in construction (e.g., Diablo Canyon and the Washington Public Power Supply System) and poor plant operation (e.g., the fuel rod meltdown at Three Mile Island).
Building excess capacity, eliminating some plants that were in planning or early stages of construction, and having nuclear plants that turned out to be much more expensive than estimated and which didn’t operate well generated tremendously high costs. Public Utility Commissions and consumers resisted putting these costs into the rate base, since they raised costs (and prices) markedly. However, since the PUCs had generally approved the investments, there was little alternative to reimbursing the utilities for the majority of these costs.
The electricity price increases came at a time when deregulation of the airlines (1978) and of the railroads and trucking industry (1980) was reducing prices in those sectors substantially. The price increases upset consumers and generated intense political pressure to hold down electricity rates. One proposed answer was deregulating electricity, fostering competition and lowering prices.
As a reaction to the 1973 energy crisis, Congress passed the 1978 Public Utilities Regulatory Policies Act (PURPA), eliminating, at least in principle, protected monopolies for electric generation. The success of early non-utility generation facilities and of deregulation in other industries led to provisions in the 1992 Energy Policy Act encouraging wholesale and retail choice in electricity. Over the next decade, nineteen states and the District of Columbia enacted some form of electric restructuring legislation.
As shown in Figure 6.1, the price of electricity rose 50% from 1970 to 1975. The “minor” issues in the Rate of Return Regulation (RORR) structure that had been ignored now became major problems. The defects had been hidden by rapidly evolving generation technology that continually lowered generation costs.
Clearly, there were problems with RORR. One alternative would have been to reform the regulatory process, as was originally tried in the United Kingdom. The other alternative, which was embraced first within Chile and the United Kingdom before spreading to the United States, was a package of reforms that would change the way that some parts of the industry were regulated, and loosen regulations in other parts of the industry.
The terms "deregulation" and "restructuring" are sometimes used interchangeably to describe the changes in the electric power industry starting in the 1990s. While the two terms are not synonyms, each accurately describes part of what has happened. The generation segment of the electricity supply chain has been progressively deregulated, with power plants competing with one another in many areas of the world to provide service to a regional grid operator. Electric transmission has largely been restructured, not deregulated, with transmission regulation shifting from a local to regional scale. In the U.S., this has meant that the regulation of electric transmission has shifted from state to federal authorities. Electric distribution has, with few exceptions, retained the same regulated structure that it has always had. This "last mile" of wires service is provided by a public utility that is granted a local monopoly in exchange for rate of return regulation by the state. In states that have undertaken deregulation and/or restructuring, the "electric utility" is in many cases just a distribution company.
Please read the first two sections from Blumsack, “Measuring the Costs and Benefits of Regional Electric Grid Integration [34],” which describes in more detail the process of deregulation/restructuring and some of the important U.S. federal policy initiatives that have pushed the industry towards its new state. It is important to realize that some aspects of restructuring have essentially arisen from federal initiatives and some from state initiatives. The most important aspects of electricity restructuring are:
Not all areas of the U.S. have adopted deregulation or restructuring. Figures 6.3 and 6.4 show the areas of the U.S. that have adopted competitive wholesale markets for electric energy generation and those individual states that have taken on power-sector reforms at the retail level.
Regional Transmission Organizations (RTOs) are non-profit, public-benefit corporations that were created as a part of electricity restructuring in the United States, beginning in the 1990s. Some RTOs, such as PJM in the Mid-Atlantic states, were created from existing “power pools” dating back many decades (PJM was first organized in the 1920s). The history of the RTO dates back to FERC Orders 888 and 889, which suggested the concept of the “Independent System Operator” (ISO) to ensure non-discriminatory access to transmission systems. FERC Order 2000 encouraged, but did not quite require, all transmission-owning entities to form or join a Regional Transmission Organization to promote the regional administration of high-voltage transmission systems. The difference between RTO and ISO is, at this point, largely semantic. Order 2000 contains a set of technical requirements for any system operator to be considered a FERC-approved RTO.
RTOs are regulated by FERC, not by the states (i.e., RTO rules are determined by a FERC-approved tariff and not by state Public Utility Commissions), and membership in an RTO by any entity is voluntary. Including Texas (which is technically outside of FERC's jurisdiction), there are seven RTOs in the U.S., covering about half of the states and roughly two-thirds of total U.S. annual electricity demand. Each RTO establishes its own rules and market structures, but there are many commonalities. Broadly, the RTO performs the following functions:
In many ways, RTOs perform the same functions as the vertically-integrated utilities that were supplanted by electricity restructuring. There are, however, a number of important distinctions between RTOs and utilities.
The set of NETL power market primers zipped file [36] contains more information on specific differences between the various RTO markets.
The separation of ownership from control in RTO markets raises some interesting complications for planning. RTOs have responsibility for ensuring reliability and adequacy of the power grid. They must perform regional planning, meaning that they determine where additional power lines and generators are required in order to maintain system reliability. But RTOs generally cannot require that member companies make any investments. They generally rely on a variety of market mechanisms to create financial incentives for member firms to invest in generation. Many transmission investments needed for reliability are eligible for fixed rates of return set by FERC.
Operating a power system requires making decisions on time scales covering fifteen orders of magnitude prior to real-time dispatch, as shown in Figure 6.5.
Since the RTO does not own any physical assets, it must effectively sign contracts with generation suppliers to provide needed services. The market mechanisms run by the RTO are used to procure generation supplies needed to maintain reliability. Once generation supplies are procured by the RTO, it can dispatch generation as needed to meet demand.
RTOs run three types of markets that enable them to manage the power grid over time scales ranging from cycles (one cycle = 1/60th of one second) to several years in advance of real-time dispatch, as shown in Figure 6.6.
Capacity Markets are meant to provide financial incentives for suppliers to keep generation assets online and to induce new investment in generation. Capacity markets are generally forward markets to have generation capacity online and ready to produce electricity at least one year ahead of time. PJM’s capacity market is run three years ahead of time. For example, a generator that participates in the PJM capacity market in 2012 is effectively making a promise to have generation capacity online and ready to produce in 2015. Capacity markets are thought to be necessary because prices in other RTO markets are not always sufficiently high to keep existing generation from shutting down or to entice new generators to enter the market. Not all RTOs have forward capacity markets. Texas, for example, does not operate a capacity market. We will discuss capacity markets in more detail in Lesson 7.
Energy Markets are perhaps the most well-known of all market constructs run by RTOs. Like capacity markets, energy markets are forward markets, but they are used by the RTO to ensure that enough generation capacity is online and able to produce energy on a day-ahead (24-hour ahead) to one-hour-ahead basis. RTOs run two types of energy markets. The first, the “day-ahead market,” is used to determine which generators are scheduled to operate during each hour of the following day (and at what level of output), based on a projection of electricity demand for that day. The second, the “real-time market,” is somewhat poorly named; this market is used by the RTO to adjust which generators are scheduled to run on an hour-ahead basis. A better term for the real-time market (which is used in some cases) would be “adjustment market” or “balancing market,” since supplies for this so-called real-time market are actually procured one day in advance (but after supplies are procured through the day-ahead market). The prices prevailing in the day-ahead and real-time markets are the most commonly referenced and quoted of all markets run by the RTOs.
Ancillary Services Markets allow the RTO to maintain a portfolio of backup generation in case of unexpectedly high demand or if contingencies, such as generator outages, arise on the system. There are many different types of ancillary services, corresponding to the speed with which the backup generation needs to be dispatched. “Reserves” represent capacity that can be synchronized with the grid and brought to some operating level within 60, 30, or 15 minutes. “Regulation” represents capacity that can change its level of output within a few seconds in response to fluctuations in the system frequency. Ancillary services are increasingly important for renewable energy integration, so we will discuss those markets in Lesson 7.
Suppliers may participate in multiple markets. For example, a 100 MW generator could offer 80 MW to the day-ahead market, 10 MW to the real-time market, and 5 MW each to the regulation and reserves markets. The generator would earn different payments for each type of service provided to the grid. Thus, while the day-ahead or real-time price is often referred to as “the” market price of electricity, in reality there are many different prices in the RTO market at any given time, each representing a different type of service offered to the RTO.
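As a hypothetical illustration of the multi-market settlement described above, the sketch below totals hourly revenue for the 100 MW generator. All prices here are invented for illustration only; they are not drawn from any actual RTO settlement.

```python
# Hypothetical settlement for the 100 MW generator described above.
# Quantities follow the text; all prices are illustrative assumptions.
offers_mw = {
    "day_ahead": 80,   # MW of energy sold in the day-ahead market
    "real_time": 10,   # MW of energy sold in the real-time (balancing) market
    "regulation": 5,   # MW of regulation capability
    "reserves": 5,     # MW of reserve capability
}
# Assumed clearing prices for one hour of each service ($ per MWh or MW-h)
prices = {
    "day_ahead": 45.0,
    "real_time": 50.0,
    "regulation": 12.0,
    "reserves": 6.0,
}

# Revenue per market, then the generator's total for the hour
revenue = {m: offers_mw[m] * prices[m] for m in offers_mw}
total = sum(revenue.values())
for market, r in revenue.items():
    print(f"{market:>10}: ${r:,.2f}")
print(f"     total: ${total:,.2f}")
```

The point of the sketch is simply that one physical unit earns several distinct prices at once, which is why “the” electricity price is really a bundle of prices.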
Because the RTO operates its entire system in an integrated way, even though its footprint may encompass many different utility territories and transmission owners, RTO type markets are sometimes referred to as “power pools” or simply “pools.” The following video contains more information about how the pool-type markets are structured, using the largest pool type model (PJM, in the Mid-Atlantic U.S.).
PJM - ah, they used a different model, a different market model, which is sometimes called a pool. And the way the PJM, or pool market, works is that individual generators commit their capacity to the market, or the pool. The pool decides which generators are going to run at which hours. The way that generators are allowed to structure supply offers is much more limited in the pool than it was in the power exchange. In the pool, the supply offers from generators basically just reflected variable cost. And so, capital cost recovery and other things were made up through separate payments or separate markets.
So, the pool-type market has really become the dominant market model used in the US today. The pool-type market was, more or less, what FERC's standard market design looked like. The markets that are run by regional transmission organizations are all run as what we call uniform price auctions. The way that a uniform price auction works is that suppliers submit supply offers to the RTO, and then these offers are aggregated to form a system supply curve. Originally, demand was assumed to be perfectly inelastic, which is the same thing as saying a vertical demand curve; so, whatever demand happened to be, that's what it was, and buyers would pay whatever price the market happened to produce. The point where this vertical demand curve crossed the offer curve, or the supply curve, set the market clearing price, or what we call the system marginal price, for that particular time. I'll show you a couple of different pictures of this.
This is approximately what the supply curve looks like for the PJM market. A couple of things about this supply curve: first of all, it has this hockey stick sort of shape. Over a large range of electricity demand, the market price, or the cost of generation, would not vary all that much. In the example that I have here, if demand is a hundred thousand megawatt hours, which is this vertical line about here, then the point where that line crosses the supply curve is what sets the market price for that particular time period. And so, in this particular time period, at a demand of a hundred thousand megawatt hours, the price would be eighty dollars per megawatt hour. And if demand were a little bit lower or a little bit higher than that, say eighty thousand instead of a hundred thousand, the market clearing price would not change all that much. But as demand increases, in this case past a hundred and twenty thousand megawatt hours, the cost of the capacity you'd have to use in order to satisfy that demand starts to rise very sharply. So, this point, the part of the supply curve over here where the price starts to rise very steeply, is sometimes called the "devil's elbow."
If we have demand of a hundred and forty thousand megawatt hours, then, all of a sudden, the price would jump to $160 per megawatt hour. So, we have increased demand by 40% and we have doubled the price. At high levels of electricity demand, small changes in demand can produce much larger than proportional changes in the market price.
The last generator that is dispatched to meet electricity demand is the generator that basically sets the market price, or the system marginal price. And the way these uniform price auctions work is that that system marginal price, or that market price, is paid to every generator whose supply offer was lower than the market price. So, if you're a generator, your profit during any time period is equal to whatever the market price is minus your marginal cost. Sometimes, you'll hear this margin referred to as a scarcity rent. The way that these uniform price auctions work, if you've taken Econ 101, is not dissimilar to just about any other market in the world.
There is a variation on the uniform price auction called the pay-as-bid auction. This is used in the UK but not in the United States. The idea behind the pay-as-bid auction is that there's still a system marginal price that is produced through the market, but rather than each generator earning the system marginal price, each generator that is accepted into the auction, that is, each generator that bids below what the system marginal price turns out to be, is paid whatever their bid was. As I said, this is used in the UK but not in the US. The belief in the US is that if you switched to a pay-as-bid system, this would simply give generators incentives to manipulate their bids. So, if you're a generator that has a marginal cost of five dollars per megawatt hour and you believe that the market price, or the system marginal price, will be fifty dollars per megawatt hour, in the uniform clearing price auction, you would earn fifty minus five equals forty-five dollars per megawatt hour in profit. In the pay-as-bid auction, if you actually submitted a bid that was equal to your marginal cost, then you would be paid only five dollars per megawatt hour. The thought in the US is that this would give this particular supplier an incentive to inflate their bid up to whatever they thought the market clearing price would be. So, pay-as-bid is not really used in the US.
So, basically, all of the centralized wholesale markets run by RTOs follow this uniform price auction format.
If you are interested, the following two videos discuss electricity market structures that are alternatives to the pool:
Areas of the US that do not have these centralized electricity markets do have wholesale trade of electricity between generators and buyers, between utilities and so on and so forth. These are what we call bilateral markets, and these bilateral markets actually existed long before electricity deregulation, long before the PJM electricity market and so on and so forth. The first of these was actually set up in the western US in the late 1980s. It was called the Western Systems Power Pool. It's actually still in place - so, the market structure that was set up in the 1980s in the western states is actually still used there today.
The way that bilateral markets work is that large volumes of electricity, sometimes called bulk power, are traded between utilities, between buyers and sellers, at whatever price the two counterparties agree upon. And how these trades actually occur is that potential counterparties call each other up on the phone. So, I might be, say, the Bonneville Power Administration, and I could call somebody at SMUD (the Sacramento Municipal Utility District) and say, "I've got spare electricity to sell; do you want to buy any?" This is basically how the bilateral markets work. And sometimes, the energy that's purchased through these bilateral markets is coupled with access to the transmission grid so that you can actually deliver the electricity. This is sometimes called "firm" energy, as opposed to non-"firm" energy, which is sold without access to the transmission grid. These bilateral markets still exist today. Now, a lot of the bilateral trading takes place over electronic trading platforms. People don't call each other up on the phone as much; instead, they might log in to some electronic trading platform, kind of like eBay or something like that. When we talk about centralized electricity markets, just keep in mind that places in the US that do not have centralized electricity markets still have trading of electricity through these bilateral markets.
In the mid-1990s, California and a handful of Mid-Atlantic states (the "PJM" market, which stands for Pennsylvania, New Jersey, and Maryland), as part of a broader electricity market restructuring effort, set up much more coordinated, much more formal mechanisms for trading electricity. The California model was a little bit different from the PJM model, but the big similarity they shared was that rather than people calling each other up on the phone, there would be a centralized exchange set up, something analogous to the New York Stock Exchange or the New York Mercantile Exchange, where crude oil futures are traded. And rather than buyers and sellers trying to find each other by calling each other up on the phone, everybody would go to this centralized market, or this centralized exchange.
For these centralized electricity markets… I said that starting in the 1990s there were two competing models for setting up electricity markets: one in California and one in the PJM states. What California did was try most directly to replicate a financial exchange, like the New York Stock Exchange, for buying and selling electricity. And so, it opened the California Power Exchange. California's biggest utilities all agreed to go along with this, and they agreed to buy all of their power from the power exchange. So, how the power exchange worked was this:
First, individual generators decided whether or not they wanted to enter their supplies into the power exchange. This is called decentralized unit commitment. It was up to the generators whether or not they wanted to make their capacity available to the exchange. There was an overall cap, a maximum amount that the generators could bid, but beyond that, the generators could more or less do whatever they wanted. They could submit whatever type of supply offer they wanted to. And then, just as generators had to submit supply offers, the three large California utilities also had to submit demand bids. So, the idea was that you would have so many suppliers rushing into this market to serve so much electricity demand that vigorous competition would ensue, sort of like in an auction.
So, the way that the power exchange market worked was that every hour, suppliers would submit supply offers, which is the blue curve here, and the supply offer indicated how much generating capacity the supplier was willing to make available to the power exchange, and at what price. When you put all of these together, you got what we call a supply curve or an offer curve. And that's in blue. On the demand side, the utilities had to submit the amount of electricity that they were willing to buy from the exchange at some price. And so, that's the pink curve right over here. And when these purchase offers were aggregated together, you got kind of a California system demand curve. Where the supply and demand curves met determined the price and quantity at which the electricity market would clear. So, in this example, and this is a picture from the late 1990s, at the point where supply equals demand, the market cleared for this particular hour: about 32,500 megawatts of generation capacity would be utilized to serve 32,500 megawatt hours of energy demand, and the market clearing price would be $190 per megawatt hour. This was repeated every single hour of every single day. And in the event that there wasn't enough electricity purchased through the power exchange to serve all of California's electricity demand, there was a secondary balancing market that would ultimately equate demand and supply on a sub-hourly basis. So, this was basically how the power exchange model worked. The California Power Exchange lasted for about two and a half years, and after the California power crisis, which brought California's electricity deregulation to a screeching halt, the power exchange was shut down. Partially because of California's dismal failure, the power exchange model has not really been replicated anywhere else except in Alberta, up in Canada, where they have had a power exchange type market for well over a decade.
They seem to like it, but in California, it didn't work so well.
Virtually all RTO markets are operated as “uniform price auctions.” Under the uniform price auction, generators submit supply offers to the RTO, and the RTO chooses the lowest-cost supply offers until supply is equal to the RTO’s demand. This process is called “clearing the market.” The last generator dispatched is called the “marginal unit” and sets the market price. Any generator whose supply offer is below the market-clearing price is said to have “cleared the market,” and is paid the market-clearing price for the amount of supply that cleared the market. Generators with marginal operating costs below the market-clearing price will earn profits. In general, if the market is competitive (all suppliers offer at marginal operating cost) the marginal unit does not earn any profit.
The uniform price auction is illustrated in Figure 6.7. There are five suppliers, each of which offers its capacity to the market at a different price. These supply offers are shown in Table 6.1. Here we will assume that supply offers are equal to the marginal costs of each generator, but in the deregulated generation market suppliers are not really obligated to submit offers that are equal to costs. The RTO aggregates these supply offers to form a single market-wide “dispatch stack” or supply curve. Demand is represented by a vertical line (the RTO assumes that demand is fixed, or “perfectly inelastic” with respect to price). In this case, demand is 55 MWh. Generators A, B, C and D clear the market. Generator E does not clear the market since its supply offer is too high. The market-clearing price, known as the “system marginal price (SMP)” would be $40 per MWh. Generators A, B, C, and D would each be paid $40 per MWh. Generators A, B and C would earn profit. Generator D is the marginal unit so it earns zero profit.
Supplier | Capacity (MW) | Marginal Cost ($/MWh)
---|---|---
A | 10 | $10
B | 15 | $15
C | 20 | $30
D | 25 | $40
E | 10 | $70
Let’s calculate the profits for each of our generators. Remember that each generator that clears the market (in this case, it would be A, B, C, and D; E does not clear the market) earns the SMP for each unit of electricity they sell. Total profits are thus calculated as:
Profit = Output × (SMP – Marginal Cost).
Since the SMP in our example is equal to $40, profits are calculated as:
Firm A profit = 10 × (40 – 10) = $300
Firm B profit = 15 × (40 – 15) = $375
Firm C profit = 20 × (40 – 30) = $200
Firm D profit = 10 × (40 – 40) = $0 (Firm D is dispatched for only 10 MWh of its 25 MW of capacity)
Firm E profit = 0 × (40 – 70) = $0.
Note in particular that Firm D, which is the “marginal unit” setting the SMP of $40/MWh, clears the market but does not earn any profits. We will come back to this case when we discuss capacity markets in Lesson 7.
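The clearing process in Figure 6.7 and the profit calculations above can be sketched in a few lines of Python. This is a minimal illustration using the Table 6.1 data, assuming (as the example does) that offers equal marginal cost; real RTO clearing engines also handle transmission constraints, ramp limits, and more complex offer structures.

```python
# Merit-order clearing of the uniform price auction from Table 6.1.
offers = [           # (supplier, capacity in MW, offer in $/MWh)
    ("A", 10, 10.0),
    ("B", 15, 15.0),
    ("C", 20, 30.0),
    ("D", 25, 40.0),
    ("E", 10, 70.0),
]
demand = 55  # MWh, assumed perfectly inelastic

def clear_market(offers, demand):
    """Dispatch offers cheapest-first; return (SMP, dispatch by supplier)."""
    dispatch, remaining, smp = {}, demand, 0.0
    for name, cap, price in sorted(offers, key=lambda o: o[2]):
        if remaining <= 0:
            break                      # demand already met; offer too high
        q = min(cap, remaining)        # marginal unit may be partially dispatched
        dispatch[name] = q
        remaining -= q
        smp = price                    # the last dispatched unit sets the price
    return smp, dispatch

smp, dispatch = clear_market(offers, demand)
print(f"SMP = ${smp:.0f}/MWh")
for name, cap, cost in offers:
    q = dispatch.get(name, 0)          # undispatched suppliers earn nothing
    print(f"Firm {name}: output {q} MWh, profit ${q * (smp - cost):.0f}")
```

Running the sketch reproduces the numbers above: an SMP of $40/MWh, profits of $300, $375, and $200 for Firms A, B, and C, and zero profit for the marginal unit D and the uncleared unit E.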
Unlike petroleum pipelines or natural gas pipelines, which in most cases require pumping or compression to maintain sufficient pressure to move product from the wellhead to the sales point, the cost of moving additional electrons through a network of conductors is essentially zero, since there is no fuel cost for “compression” in electrical networks.
(Actually, this isn’t quite true for a couple of reasons. First, transmission lines aren’t perfect conductors, so there is some resistance in the network. Because of this resistance, some of the electricity injected into the transmission grid by power generators is lost as heat between the power plant and the customer. The magnitude of these “resistive losses” is around 10% in a modern power grid like North America’s. What this means is that the system has to generate more power than is actually demanded, to account for these losses. For example, if demand in a system with 10% losses is 100 MW, then the system will need to generate around 111 MW. Second, the transmission grid needs to maintain a certain voltage level, and maintaining this level sometimes involves output adjustments at the power plant location, which does impose an economic cost on the system. In the discussion here, we will ignore these two costs to focus on the effects of transmission congestion.)
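The loss gross-up in the parenthetical above is a one-line calculation. A minimal sketch, assuming that losses are a fixed fraction of the energy generated:

```python
# Grossing demand up for resistive losses: if a fraction `loss` of generated
# energy is dissipated as heat, generation must equal demand / (1 - loss).
demand_mw = 100   # demand from the example in the text
loss = 0.10       # ~10% resistive losses, as in the text

generation_mw = demand_mw / (1 - loss)
print(f"Required generation: {generation_mw:.1f} MW")  # about 111.1 MW
```

Note that the gross-up is slightly more than demand × (1 + loss), because the extra generation needed to cover the losses itself incurs losses.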
In the market-clearing example that we just went through, all suppliers were paid the market-clearing “system marginal price,” and the problem did not really say anything about where the generators or customers were located, or what the transmission network looked like. We just assumed that power produced at any one plant could be delivered to a customer at any location in the transmission network. Thus, electricity markets should exhibit the law of one price, just as we saw in natural gas networks.
While electricity markets should exhibit the law of one price, the reality is that they often do not. Figure 6.8 shows a contour map of electricity prices in the PJM electricity grid on a warm, but not terribly hot day in June 2005. You can find more up-to-date maps [37] on the PJM website [38]. You can also find a really nifty animation of LMPs [39] from the MISO market [40]. If you look carefully at the MISO animation, you may see some negative prices, meaning that someone using electricity gets paid to use it, and someone producing electricity actually pays to generate power! While this may seem strange, it is actually a natural economic outcome of a market with fluctuating supply and demand, and no storage. We'll get to the negative-price phenomenon later in the course.
Getting back to our picture of prices in the PJM system, you can see that prices in the western portion of PJM’s grid are an order of magnitude lower than prices in the eastern portion of the grid. This means that a power plant located in, say, Pittsburgh could make a lot of money by selling electricity to customers in Washington, DC. The demand from Washington should bid up the price in Pittsburgh as more generation in Pittsburgh comes online to serve the Washington market. This is just what we saw in our study of natural gas markets. But this doesn’t happen in electricity. Why?
The basic answer is “transmission congestion.” Conductors cannot hold an infinite number of electrons. At some point the resistive heat would just cause the conductor to melt. (Remember that materials expand when they get hot. When you hear about power lines “sagging” into trees, that is what’s going on.) So, power system engineers place limits on the amount of power that a transmission line can carry at any point in time. When a line’s loading hits its rated capacity, we say that the line is “congested” and it can’t transfer any additional power. So, the system has to find another way to meet demand, without additionally loading congested lines. Usually this involves reducing output at low-cost generators and increasing output at higher-cost generators. This process, known as “out of merit dispatch,” imposes an economic cost on the system.
After correcting for transmission congestion by adjusting the dispatch of power plants, the cost of meeting demand in one location (e.g., Washington) may be substantially higher than the cost of meeting demand in another location (e.g., Pittsburgh). The locational marginal price (LMP) at some particular point in the grid measures the marginal cost of delivering an additional unit of electric energy (i.e., a marginal MWh) to that location.
We will illustrate the concept of LMP using the two-node network shown in Figure 6.9. Node 1 has 100 MWh of demand, while node 2 has 800 MWh of demand. There are two generators in the system – one at node 1 with a marginal cost of $20/MWh and one at node 2 with a marginal cost of $40/MWh. A transmission line connects the two nodes. For the purposes of this example, assume that either of the generators could produce 1,000 MWh, and we will ignore any issues with transmission losses. Both generators submit supply offers to the electricity market that are equal to their marginal costs.
Suppose that the transmission line could carry an infinite amount of electricity. How should the system operator dispatch the generators to meet demand? Either generator by itself could meet all 900 MWh of demand in the system, so the system operator would dispatch generator 1 at 900 MWh and generator 2 would not be dispatched. The SMP would be equal to $20/MWh.
Now, suppose that the transmission line could carry only 500 MWh. In this case, the system operator could supply all of the demand at node 1 with generator 1, but only 500 MWh of demand at node 2 with generator 1. The remaining demand at node 2 would need to be met by dispatching generator 2. Since there is plenty of generation capacity left at generator 1, if demand at node 1 were to increase by 1 MWh, it could be met using generator 1 and thus the LMP at node 1 is $20 per MWh. If demand at node 2 were to increase, that demand would need to be met by increasing output at generator 2. Thus, the LMP at node 2 would be $40 per MWh.
Next we'll look at how much the Generators are paid and how much the Customers pay. Customers at Node 1 are charged $20 per MWh, while customers at Node 2 are charged $40 per MWh for all energy consumed. Generator 1 is paid $20 per MWh, while Generator 2 is paid $40 per MWh for all energy produced. You may notice here that while customers at Node 2 pay $40 per MWh for all 800 MWh that they consume, some of that energy was imported from Node 1 at a much lower cost. This is not a mistake - it's how the LMP pricing system works in the US.
The RTO collects revenue from the customers as follows:
From Node 1: 100 MWh × $20 per MWh = $2,000
From Node 2: 800 MWh × $40 per MWh = $32,000
Total Collections = $34,000
The RTO pays the generators as follows:
To Generator 1 (at Node 1): 600 MWh × $20 per MWh = $12,000
To Generator 2 (at Node 2): 300 MWh × $40 per MWh = $12,000
Total Payments = $24,000
The RTO collects excess revenue in the amount of $34,000 - $24,000 = $10,000. It collects this excess revenue because customers at Node 2 pay $40/MWh for all energy consumed, while some of those megawatt-hours only cost $20/MWh to produce.
This excess revenue is called "congestion revenue." In general, when there is congestion in the network and LMPs differ, then there will be some congestion revenue. We will discuss later in the course what the RTO does with this extra revenue. For now, just remember that whenever there is transmission congestion and LMPs at different nodes of the network aren't equal, the RTO will usually wind up with some congestion revenue.
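The two-node settlement above can be sketched as a small Python function. This is a minimal illustration under the example's assumptions (the node-1 generator is the cheaper one, either generator could serve the whole system, and losses are ignored); the function name and structure are my own, not an actual RTO algorithm.

```python
# Two-node dispatch, LMPs, and congestion revenue for the Figure 6.9 example.
def two_node_market(d1, d2, line_cap, mc1, mc2):
    """Return (lmp1, lmp2, g1, g2, congestion_revenue).

    Assumes mc1 < mc2 and that each generator has enough capacity
    to serve the entire system by itself; losses are ignored.
    """
    export = min(line_cap, d2)        # node 1 exports up to the line limit
    g1 = d1 + export                  # cheap generator serves node 1 + exports
    g2 = d2 - export                  # remainder met by the node-2 generator
    lmp1 = mc1
    lmp2 = mc2 if g2 > 0 else mc1     # line binding => node 2 priced at mc2
    collected = d1 * lmp1 + d2 * lmp2 # charged to customers at their LMPs
    paid = g1 * lmp1 + g2 * lmp2      # paid to generators at their LMPs
    return lmp1, lmp2, g1, g2, collected - paid

lmp1, lmp2, g1, g2, congestion = two_node_market(
    d1=100, d2=800, line_cap=500, mc1=20.0, mc2=40.0)
print(lmp1, lmp2, g1, g2, congestion)
```

With the inputs from the worked example, the function reproduces LMPs of $20 and $40 per MWh, dispatch of 600 MWh and 300 MWh, and congestion revenue of $10,000. The same function can be used to check the two-node exercise that follows.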
As an exercise for yourself, calculate the LMPs and congestion revenue under a second two-node example. Demand at Node 1 is 5 MWh, while demand at Node 2 is 10 MWh. The transmission line connecting Nodes 1 and 2 has a capacity of 5 MWh. The marginal cost of Generator 1 is $10/MWh while the marginal cost of Generator 2 is $15/MWh.
You should find that the LMP at Node 1 is $10/MWh; the LMP at Node 2 is $15/MWh; and that 5 MW of power is exported from Node 1 to Node 2 on the transmission line.
You should also find that congestion revenue is equal to $25.
While rate of return regulation served the electricity industry and consumers well for many decades, it had embedded in it a number of incentive problems that made for inefficient operation of electric utilities and some behavior by regulators that was not always in the public interest. In particular, rate of return regulation gave electric utilities incentives to over-invest in capital, the so-called "Averch-Johnson" effect. Regulators, in turn, labored under incomplete information regarding the state of the utilities that they were supposed to regulate. Even on their best day, the utilities knew more about the power grid than their regulators did, so utilities could more easily get regulators to approve investments. Public utility commissioners are aided by large technical staffs, but at the end of the day, these staff members have expert knowledge yet still have less data and system information than the utilities.
Recognizing that generation could be competitive over large regional areas, and that transmission and distribution needed to retain some form of regulation, the restructuring of the electricity industry consisted of the following fundamental changes:
In the United States in particular, restructuring has proceeded unevenly. Around half of U.S. states now have a power sector that has been restructured in some way. The other half still operate following the regulated utility structure that we discussed in Lesson 5.
You have reached the end of Lesson 6! Double check the What is Due for Lesson 6? list on the first page of this lesson to make sure you have completed all of the activities listed there before you begin Lesson 7. Note: The Lesson 7 material will open Monday after we finish Lesson 6.
For many decades, the companies that operated electricity grids had to do so with two objectives in mind: reliability (keeping the lights on) and cost (keeping electricity affordable). Despite the complexities of the electric grid, these two objectives were relatively simple to meet, in part because the regulatory system shifted a lot of the financial risk away from companies and onto ratepayers. It isn’t that hard to keep a power grid reliable by overbuilding it.
This lesson builds on the previous two by adding a third objective to power grid operations – lowering the environmental footprint of power generation by adding large amounts of low-emissions generation resources. This is, very broadly, sometimes called the “renewable integration” challenge. The technical challenges in maintaining the balance of supply and demand are immense, but our focus is on the economic challenges, particularly in the context of electricity deregulation.
By the end of this lesson, you should be able to:
There are a lot of good resources that describe the renewables integration challenge. We will lean on external readings more heavily in this lesson than in previous lessons.
This lesson will take us one week to complete. Please refer to the Course Calendar for specific due dates. Specific directions for the assignment below can be found within this lesson.
If you have any questions, please post them to our Questions about EME 801? discussion forum, located in the Start Here! module. I will check that discussion forum daily to respond. While you are there, feel free to post your own responses if you, too, are able to help out a classmate. You can also contact the instructor through Canvas Conversations.
Before you get into the details of the lesson, please have a look at the following story from the New York Times about wind farms in Vermont:
Intermittent Nature of Green Power Is Challenge for Utilities. [42]
As you are reading the story, think about the following questions (these would apply to solar as well as wind, but since the story is specifically about wind, we’ll pick on wind to frame the questions). Clearly the integration of wind into the Vermont electric grid (which is interconnected to the rest of the New England grid) has not gone as smoothly as we might have hoped.
If you have any reactions to the above questions, or to the story more generally, please feel free to post to the Lesson 7 discussion forum early in the week!
Please read the “Overview” section (through page 14) from “Managing Variable Energy Resources to Increase Renewable Electricity’s Contribution to the Grid. [49]”
The terms “renewable energy resources” and “variable energy resources” are often used interchangeably when applied to electric power generation. The two are, in fact, not the same, although there is some overlap. The term “Variable Energy Resource” (VER) refers to any generation resource whose output is not perfectly controllable by a transmission system operator, and whose output is dependent on a fuel resource that cannot be directly stored or stockpiled and whose availability is difficult to predict. Wind and solar power generation are the primary VERs, since the sun does not shine all the time (even during the day, clouds and dust can interfere with solar power generation in surprising ways) and the wind does not blow all the time. In some cases, hydroelectricity without storage (so-called “run of river” hydro) could be considered a VER, since its output depends on streamflows at any given moment.

VERs are, in some sense, defined relative to so-called “dispatchable” or “controllable” power generation resources, which encompass fossil-fuel plants, nuclear, and some hydroelectricity (with reservoir storage). The VER concept is pretty vague if you think about it: after all, coal or natural gas plants can sometimes break, so they do not have perfect availability, and fuel supply chains for fossil plants can also be disrupted. In addition, the “variable” aspect is, at least in concept, nothing new for system operators. Demand varies all the time, and system operators are able to handle it without substantial negative impacts.
Keep in mind that the “variability” of VERs is different over different time scales. Figures 7-9 of “Managing Variable Energy Resources to Increase Renewable Electricity’s Contribution to the Grid” show how wind and solar are variable over time scales of days or fewer. Figure 7.1, below, shows wind energy production in the PJM RTO every five minutes over a period of two years. This figure shows how wind production varies seasonally and also inter-annually, with windier and less windy years.
Variability with respect to electricity demand is also important. If you think about it, electric system operators don’t really care about variability in demand or in VERs per se: what they care about is being able to match supply and demand on a continuous basis. If VERs and demand were perfectly synchronized, so that VER output increased (or decreased) at exactly the moments when demand increased (or decreased), then there would be no problem. If VERs and demand are anti-correlated or asynchronous, matching supply and demand becomes more of a challenge. Part of the challenge with the wind in Vermont, as you have read, is that the wind tends to blow most strongly at night, when there is less electricity demand and the power plants serving that demand are inflexible “base load” units that are difficult to ramp down. This is typical of wind; Figure 7.2 shows (normalized) wind output and electricity demand by season in the PJM RTO. Solar, on the other hand, is much more highly correlated with electricity demand (at least on a day-to-day basis).
Whether or not you believe that there is anything “special” about VERs, electric grid operators around the world are rethinking the way that they plan and operate their systems and markets in order to accommodate various forms of policy that are promoting investment in VERs. Read the introductory portion of Chapter 4 of Energy Storage on the Grid and the Short-term Variability of Wind and pp.15-20 of “Managing Variable Energy Resources to Increase Renewable Electricity’s Contribution to the Grid,” which discuss various types of strategies that grid operators have been using to manage large-scale VERs on the grid. As you may have gathered from the Vermont article, some of the control strategies used most often by grid operators (such as manually curtailing wind energy output during periods when supply exceeds demand) have also been the most controversial.
More generally, there are three economic challenges relevant to VER integration, each of which we will discuss in a bit more detail.
Many restructured electricity markets offer power generators payments for the capacity that they have ready to produce electricity, not just for the electricity that they actually produce. For example, if you owned a 100 MW (100,000 kW) power plant and the capacity payment was $10 per kW per month, you would earn $12 million per year regardless of how much electricity you actually produced. In areas that have them, capacity payments have become a major portion of a generator’s revenue stream.
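If you want to check that arithmetic yourself, here it is in a couple of lines of Python (the variable names are ours; the capacity and payment figures come from the example above):

```python
# Capacity payment revenue for the 100 MW example plant in the text.
plant_capacity_kw = 100 * 1000       # 100 MW expressed in kW
capacity_payment = 10                # $ per kW per month

annual_capacity_revenue = plant_capacity_kw * capacity_payment * 12
print(annual_capacity_revenue)       # 12000000, i.e., $12 million per year
```

Note that this revenue arrives whether or not the plant produces a single megawatt-hour of energy, which is exactly what makes capacity payments such a significant (and controversial) revenue stream.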
Capacity markets are a little bit odd. Almost no other market for any commodity, anywhere in the world, has them. (There is a capacity market for natural gas pipelines, but it is operated differently than electricity capacity markets.) In markets for other non-storable goods, like hotel rooms and seats on airplanes, any fixed costs of operations are rolled into the room rate or ticket price. If capacity goes persistently unused, it eventually exits the market (the hotel closes, or the airline goes bankrupt). Yet this doesn’t really happen in electricity.
There are three features of electricity markets that have justified the need for capacity markets, all of them regulatory interventions of one kind or another. The first goes back to 1965, when a large blackout crippled much of the U.S. northeast [53]. Rather than being saddled with additional regulations imposed by an angry government, the electricity industry adopted a set of then-voluntary standards [54] for reliability. (The standards are now mandatory.) One of these standards was the “installed capacity” requirement, which stated that electric grid operators needed to control more capacity than they thought they would need to meet peak demand. For example, if annual peak electricity demand in your system was 100 MW, you might be required to own or control 120 MW of capacity, just in case your power plants broke, your estimate of demand was wrong, or some combination of both. This extra capacity requirement is sometimes called the “capacity margin.”
The second intervention goes back to California’s power crisis of 2000 and 2001. California was one of the first states to deregulate its electricity industry, and it got a lot wrong in the deregulation process. In particular, as firms like Enron found out, the markets created in California were ridiculously simple to manipulate. Prices could easily be driven to levels 100 times higher than normal. Following California’s debacle, other regions did push forward with deregulation, but no one wanted to be the next California. So, virtually all restructured markets instituted “caps” (or upper limits) on electricity prices. In PJM, for example, the price cap is set at $1,000 per MWh. This is the maximum amount that any company may charge for electricity. The markets also instituted watchdogs known as “market monitors” who are tasked with reviewing supply bids submitted by generating companies and flagging those that are deemed to be manipulative. California showed us that electricity markets (even those that are well-designed) are susceptible to manipulation, so many of these market monitors are quite aggressive, punishing firms that submit bids that are higher than marginal costs.
The final regulatory intervention is retail electric rates that are fixed and do not reflect fluctuations in the cost of generating electricity. This has been justified partly on the grounds of protecting consumers from volatile energy prices (no energy commodity has more price volatility than electricity), and partly because the electric meters that most customers have are still based on a century-old analog technology that does not allow the utility to bill customers based on time of use.
So, suppose that the electric system operator decided that a new power plant needed to be built “for reliability” (i.e., in order to meet the installed capacity requirement). In a market environment, all you would need is for someone to build the plant, operate it during times of peak demand when prices are relatively high, and rake in the dough. Sounds simple enough, right?
Figure 7.3 shows why, in fact, the situation is not so simple. The figure shows the average cost of producing one megawatt-hour from a new natural gas power plant (the “levelized cost of energy,” which we will meet in detail in a future lesson) as a function of how often the plant operates. The more hours the plant operates, the more productive hours over which it can spread its capital cost, and so the lower the price it would need to charge in order to be profitable. A typical power plant that would operate only during the highest-priced hours would need to charge a price higher than the price cap in order to remain financially solvent. Furthermore, consumers buying energy from that plant would pay the fixed retail rate, not the plant’s levelized cost of energy.
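The logic behind Figure 7.3 can be sketched with a simple breakeven calculation. The cost figures below are made up for illustration (they are not taken from the figure), but they show the basic shape of the problem:

```python
# Sketch of the levelized-cost logic: a plant must spread its annual
# fixed costs over however many hours it actually runs.
def breakeven_price(annual_fixed_cost, capacity_mw, hours, variable_cost):
    """Average $/MWh the plant must earn to cover all of its costs."""
    return annual_fixed_cost / (capacity_mw * hours) + variable_cost

# Hypothetical 100 MW gas plant: $100,000 per MW-year of fixed costs,
# $50/MWh in fuel and other variable costs (illustrative numbers only).
fixed, cap, fuel = 100_000 * 100, 100, 50

print(breakeven_price(fixed, cap, 100, fuel))   # runs 100 h/yr: $1,050/MWh
print(breakeven_price(fixed, cap, 8000, fuel))  # runs 8000 h/yr: $62.50/MWh
```

With these assumptions, a plant that runs only 100 hours per year needs more than the $1,000/MWh price cap just to break even, while a plant running nearly all year needs far less. That gap between the cap and the breakeven price is the heart of the conundrum described next.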
Figure 7.3 basically illustrates the conundrum: someone needs to build the plant for reliability reasons. But because of price caps and fixed retail pricing, no one would ever make money operating this plant. We are stuck in a contradiction – the system operator needs to maintain its installed capacity margin, but no power generation company has any incentive to build the plants that will meet this regulatory requirement.
In electric systems that have not restructured, this isn’t so much of a problem since the regulated utility can ask its public utility commission to pass through the costs of this plant to its ratepayers. But in a deregulated generation environment, there is no guarantee of cost recovery. This conundrum is sometimes called the “missing money” problem. Capacity payments are supposed to solve this problem by providing additional revenue to power plants to keep them operating. It is important to realize that capacity payments have been incredibly controversial since they are sometimes viewed as windfall profits for power generators.
To see the relevance to VER integration, let’s take our example of the uniform price auction from the previous lesson. In that example, we have five generators and demand is 55 MWh. Generator D clears the market and the SMP is $40/MWh. Profits are:
You can see how Firm D might have a “missing money” problem since it produces electricity but earns no profits. If Firm D is the “marginal generator” too often, it will not be able to cover any fixed costs and will eventually go out of business. (We didn’t have fixed costs in the example, but most power plants do have some fixed costs of operation, like land leases or other rental payments.)
Now suppose that a 20 MW wind plant was built in this market. Wind energy has a very low marginal cost – so close to zero that we can just call it zero for this example. This new wind energy plant moves the entire dispatch curve to the right by 20 MW. If the wind is producing at full capacity and demand is 55 MWh, then Firm C becomes the market-clearing generator and the SMP falls to $30/MWh. (The 20 MW of wind with zero marginal cost has the same effect on Firms A through E as a 20 MW decline in electricity demand – why?) Try recalculating profits yourself under this scenario. You should get:
While it is nice that the wind farm’s profits are so high and that the wholesale price of power has fallen, three of the five existing generating firms are not making any profit and might consider shutting down altogether. But according to reliability rules, we need those plants to be operating in order to have enough bulk generation capacity. Because of this requirement, the wind farm has effectively instigated a “missing money” problem for some of the other generators in the system. Hence, there is some need for a capacity payment or other revenue stream, but only because of the regulatory requirements for installed capacity.
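The dispatch arithmetic in this example can be reproduced with a short script. The capacities and marginal costs below are our assumptions (they are not stated explicitly in this lesson), chosen to be consistent with the profit figures quoted in the capacity-payment discussion:

```python
# Merit-order (uniform price) dispatch sketch. Generator capacities and
# marginal costs are ASSUMED for illustration.
def dispatch(offers, demand):
    """offers: list of (name, capacity_MW, marginal_cost $/MWh).
    Returns (clearing price, {name: quantity dispatched})."""
    quantities, price, remaining = {}, 0, demand
    for name, cap, cost in sorted(offers, key=lambda o: o[2]):
        q = min(cap, remaining)          # dispatch cheapest offers first
        quantities[name] = q
        if q > 0:
            price = cost                 # the marginal generator sets the SMP
        remaining -= q
    return price, quantities

offers = [("A", 10, 10), ("B", 15, 15), ("C", 20, 30), ("D", 25, 40), ("E", 25, 50)]

smp, q = dispatch(offers, 55)                        # no wind: D is marginal
print(smp)                                           # 40
smp_w, q_w = dispatch([("wind", 20, 0)] + offers, 55)  # add 20 MW of wind
print(smp_w)                                         # 30: C becomes marginal

costs = {name: cost for name, cap, cost in [("wind", 20, 0)] + offers}
profits = {name: (smp_w - costs[name]) * qty for name, qty in q_w.items()}
print(profits)   # wind: 600, A: 200, B: 225, C/D/E: 0
```

The key point carries over regardless of the exact numbers: zero-marginal-cost wind displaces the most expensive generator and lowers the clearing price for everyone.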
To figure out what the capacity payment needs to be for a specific generator, we can compare its revenues to its total costs. The difference represents the capacity payment needed for the power plant to break even. Consider two cases as an illustration. First, suppose that Firm C in our example above had fixed costs of $100. With the wind plant in the market, Firm C breaks even on its operating costs (the SMP is $30 per MWh and its production cost is also $30 per MWh) but covers zero of its fixed costs. So, the capacity payment needed for Firm C would be $100. As a second example, suppose that Firm A had fixed costs of $250. With the wind plant in the market, Firm A earns $200 in operating profit, but its fixed costs are $250. Its total profit would thus be -$50 and Firm A would need a capacity payment of $50 in order to break even.
Finally, suppose that Firm B had fixed costs of $100. With operating profits of $225, Firm B earns a total profit of $225 - $100 = $125. Thus, Firm B is profitable even without a capacity payment. Does that mean that Firm B is not allowed to earn a capacity payment in the electricity market? No - Firm B can compete in the capacity market and earn a capacity payment just like any other power plant. While the capacity market was set up because some power plants (like A and C in our illustrations) have revenue shortfalls, participation in the capacity market is not limited just to those plants who "need" a capacity payment to avoid losing money.
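The "needed capacity payment" rule used in these three illustrations is simple: top each plant's operating profit up to its fixed costs, but never below zero. A minimal sketch:

```python
# Capacity payment needed for a plant to break even, per the examples above.
def needed_capacity_payment(operating_profit, fixed_costs):
    return max(0, fixed_costs - operating_profit)

print(needed_capacity_payment(0, 100))     # Firm C: needs 100
print(needed_capacity_payment(200, 250))   # Firm A: needs 50
print(needed_capacity_payment(225, 100))   # Firm B: needs 0 (already profitable)
```

Remember, though, that this is the payment a plant *needs*, not the payment it *receives*: as noted above, even profitable plants like Firm B can compete for capacity payments.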
You might be wondering what happened to the 20 MW of new wind – doesn’t that count as “capacity?” The answer is that different regions have very different ways of allocating capacity credits to VERs. Typically, the system operator will let a VER count for a fraction of its capacity. So, our 20 MW wind farm in this example would count for perhaps 2 or 3 MW towards the system-wide installed capacity requirement.
In some areas with a high penetration of VERs, particularly wind, the price of electricity has started to become negative, meaning that suppliers must pay the system operator in order to keep producing electricity. This also means that consumers get paid to use more electricity, which sounds like a terrific deal. Here’s how such a thing is possible. Mount Storm Lake in West Virginia is home to both a large coal plant (one of the biggest in the state) and a large wind farm. These plants are connected to the same transmission line that winds its way towards Washington D.C. This transmission line is almost always congested, so the Mount Storm group of power plants cannot always produce at full combined capacity. During an autumn evening, when electricity demand is low, the Mount Storm coal plant is producing at full capacity and meeting electricity demand. All of a sudden, the wind picks up and the Mount Storm wind farm starts to generate a lot of power and there is excess supply at Mount Storm. What options does the system operator have? It could simply shut down the coal plant (which would be hard to do quickly since coal plants are inflexible) or shut down the wind farm (which, as you saw in Vermont, is not without its own problems). Or it could allow the price to go negative and charge either (or both) plants to continue to produce electricity. If there happened to be some electricity consumer at Mount Storm, that consumer could get paid for absorbing that excess supply locally.
In a power grid, supply and demand must be matched at every second. In order to keep the grid operating reliably, system operators need a portfolio of backup resources in case of unplanned events. These backup resources are collectively known as “ancillary services.” These are purchased by the system operator through a type of option arrangement. The system operator might have an agreement with a power plant that gives the system operator the right to start-up or shut-down that plant if a contingency arises on the power grid, or if demand increases or decreases in ways that the system operator did not predict.
A very simple example of how this might work is shown in Figures 7.5 and 7.6. Figure 7.5 illustrates a hypothetical power system with some amount of predicted or “scheduled” demand. Actual demand deviates from this scheduled demand by small amounts – sometimes higher and sometimes lower. Figure 7.6 shows how a generator providing ancillary services would change its output in response to demand fluctuations – a practice known as “load following.” Ancillary services are the primary mechanisms currently in place for system operators to accommodate unanticipated variation in VER output.
There are many different flavors of ancillary services. Please read the “Ancillary Services Primer” which provides some more detailed information. Two types of ancillary services are of particular relevance for VER integration – reserves and regulation.
“Reserves” represent backup generation that can be called upon in a certain amount of time in case of a contingency on the power grid, like the unexpected loss of a generator or transmission line (interestingly enough, it’s not clear whether or not the unanticipated loss of a VER for resource reasons, like the wind stops blowing, is considered a “contingency” in the eyes of reliability regulators). There are two types of reserves:
“Frequency regulation” or just “regulation” refers to generation that can respond automatically to detected deviations from the frequency at which all generators in a synchronous AC system are rotating (in the US, this is 60 Hertz; some other countries use 50 Hertz). Regulation is sometimes called “automatic generation control,” since the required response is typically too fast for a human being to initiate.
Regulation is, at this point in time, the most relevant ancillary service for VER integration and is also one of the most difficult to understand. The frequency of the power grid needs to remain constant at 50 or 60 Hertz (depending on the country). That frequency is related to the demand-supply balance. It may be helpful to think about frequency as analogous to the water level in a bathtub, as illustrated in Figure 7.7. The balance of demand and supply is like the tub having an identically-sized faucet and drain. If the faucet is larger than the drain, the water level rises – and in a power grid if supply is larger than demand then the frequency will go up. If the drain is larger than the faucet (or if demand exceeds supply), then the system frequency declines.
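As a deliberately crude illustration of the bathtub analogy, here is a toy simulation in which frequency drifts in proportion to the supply-demand imbalance. Everything here is made up (real grid frequency dynamics are governed by rotating-machine inertia and governor response, not a simple proportionality), but it captures the direction of the effect:

```python
# Toy "bathtub" model of grid frequency. The constant k and all numbers
# are invented for illustration; this is NOT how real grids are modeled.
def simulate_frequency(f0, supply, demand, k=0.01, steps=10, dt=1.0):
    """Frequency drifts up when supply > demand, down when demand > supply."""
    f = f0
    for _ in range(steps):
        f += k * (supply - demand) * dt
    return f

print(simulate_frequency(60.0, 101, 100))  # faucet bigger than drain: above 60 Hz
print(simulate_frequency(60.0, 100, 101))  # drain bigger than faucet: below 60 Hz
```

Frequency regulation, in this picture, is the act of nudging `supply` up or down every few seconds so that the frequency stays pinned at its target.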
Frequency regulation as an ancillary service corrects for frequency deviations by increasing or decreasing the output of specific generators, usually by small amounts, in order to effectively increase or decrease the size of the faucet relative to the drain. Response times for generators providing regulation are typically on the order of seconds, which is primarily why frequency regulation is used as a way for system operators to ride through unanticipated fluctuations in VER output.
Recall from the Ancillary Services discussion that generators are used to provide frequency regulation by effectively increasing the size of the faucet relative to the drain (if you don’t remember the bathtub analogy, go have a peek at Figure 7.7). There is no particular reason that generators need to have a monopoly on this. If we were to change the size of the drain relative to the faucet, we would accomplish the same thing, right?
The idea of balancing supply and demand on the demand side rather than solely on the supply side is not that new – electric utilities have been paying their customers to put timers on thermostats or hot water heaters for decades. Some utilities have even figured out how to charge customers more for electricity during the day than at night. But following an order from the Federal Energy Regulatory Commission in 2012 (Order 745), there has been a renewed interest in developing mechanisms to pay people and businesses not to consume electricity. “Demand Response” refers to end-use customers adjusting demand in response to price signals, or energy conservation during periods of high demand (to prevent blackouts). Many electricity systems, particularly those with active wholesale markets, have implemented wholesale demand response programs that allow customers to offer demand reduction on the same basis that generators offer supply. There are two basic flavors of demand response:
Many electricity systems, both regulated and restructured, are also experimenting with allowing demand-side participants to offer ancillary services such as frequency regulation (remember the bathtub analogy and you will see that, at least in concept, this is a perfectly sensible idea). But to date, the vast majority of demand-side participation, and over 90% of all of the profits from demand response, has been through the capacity market. Figure 7.8 illustrates the rapid growth in demand-side capacity market participation. In recent capacity auctions, the largest source of new “capacity” was actually commitments to reduce demand rather than increase supply.
The result has been to introduce volatility into capacity market pricing. Just as the introduction of VERs lowered the price in the electricity market in our previous example, so too has demand response lowered prices in the capacity market, to the point where generators have started to complain of a “missing money” problem in the capacity market as well!
The primary piece of regulation that has enabled demand response in electricity markets is known as FERC Order 745 [55], which mandated that Regional Transmission Organizations compensate demand response at the prevailing market price under most conditions. This means that if you successfully offer electricity demand reduction to the RTO, you benefit twice. First, you benefit by not having to pay for the electricity that you did not consume. Second, you benefit because the RTO pays you the prevailing market price for all of that foregone consumption.
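To see the "paid twice" point concretely, here is the arithmetic with hypothetical numbers (both rates below are invented for illustration):

```python
# The double benefit to a demand response provider under Order 745.
retail_rate = 120   # $/MWh the customer would otherwise have paid (assumed)
lmp = 200           # $/MWh prevailing wholesale market price (assumed)
reduction = 5       # MWh of demand reduction offered to the RTO

avoided_bill = retail_rate * reduction   # benefit 1: the bill never paid
dr_payment = lmp * reduction             # benefit 2: paid the market price
print(avoided_bill + dr_payment)         # 1600: total benefit to the customer
```

A generator selling those same 5 MWh would earn the market price minus its production costs, which is precisely why opponents argued that demand response was being over-compensated.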
Because of the way that Order 745 mandated that demand response be paid, it has been very controversial, so much so that it was successfully challenged in May of 2014 before the U.S. Court of Appeals for the D.C. Circuit. You can read the entire ruling here [56], but the gist of the arguments against Order 745 was as follows:
The D.C. Circuit Court agreed with both of these arguments and overturned FERC Order 745, effectively removing economic demand response as a participant in U.S. electricity markets. In a moment that captured the attention of electric power industry geeks everywhere, the D.C. Circuit Court decision was appealed to the U.S. Supreme Court [57], which overturned [58] the Circuit Court decision and allowed demand response to be paid the same way that power plants get paid. If you happen to be a legal junkie, you might like to read the Supreme Court’s decision [59], which has had implications for a lot of different smart grid technology programs run by regional power markets.
Variable Energy Resources (VERs) - specifically wind and solar - do introduce some special economic challenges to power grids and electricity markets. Since VERs have such low marginal costs (remember that fuel from the wind and sun is free) they do have the potential to depress prices in the day-ahead and real-time energy markets, even producing negative prices in some cases. This creates challenges for system operators in restructured areas in particular, since they are charged with having enough generation capacity on the grid to meet reliability requirements, but individual generation owners will stay in the market based only on business criteria - whether they earn enough money to make continued operations worthwhile. The resulting conflict between market and regulatory signals has been called the "missing money" problem, and capacity payments have been introduced, albeit controversially, as a mechanism to make up for the "missing money." Energy output from VERs can also fluctuate faster than system operators are able to adjust for, so large-scale VER integration will almost certainly increase the demand for existing ancillary services (particularly frequency regulation) and may necessitate new types of ancillary services, which could be provided by either demand-side or supply-side resources.
You have reached the end of Lesson 7! Double check the What is Due for Lesson 7? list on the first page of this lesson to make sure you have completed all of the activities listed there before you begin Lesson 8. Note: The Lesson 8 material will open Monday after we finish Lesson 7.
The links below provide an outline of the material for this lesson. Be sure to carefully read through the entire lesson before returning to Canvas to submit your assignments.
By this point in the course, we have discussed the market structures and institutions that govern price dynamics and behavior in the major energy commodity markets: crude oil and refined petroleum products, natural gas, and electricity. We are now going to shift our focus down one level, from the market to the individual firm or project. The next series of lessons, starting with this one, will develop quantitative tools that will allow you to prepare an assessment of the profitability of an energy project, whether that project falls into a conventional energy category (e.g., a natural gas power plant) or a sustainable energy category (e.g., solar hot water heating). While our focus here will be on profitability as a performance metric for project evaluation, it is possible to use other performance metrics (such as life-cycle impacts, or embedded energy and water). In the investment world, however, these alternative metrics are used far less frequently in project evaluation.
This lesson will build up to the construction of the "pro forma" financial statement for a single energy project. Along the way, we will learn a little about the unique language of accounting (and why Enron gave accounting a bad name); what "depreciation" means to a tax accountant; and how a "10K" is not just a road race.
By the end of this lesson, you should be able to:
Basic accounting concepts and the items on the pro forma income and cash flow statements will be explained using the following resources
This lesson will take us one week to complete. Please refer to the Course Calendar in Canvas for specific due dates. Specific directions for the assignment can be found within this lesson. This week's assignment asks you to prepare a simple pro forma P&L and cash flow statement for a power plant. Please pay attention to the instructions regarding submission through the Lesson 08 Drop Box in Canvas.
If you have any questions, please post them to our Questions about EME 801? discussion forum (not email), located in the Lesson 00 module in Canvas. I will check that discussion forum daily to respond. While you are there, feel free to post your own responses if you, too, are able to help out a classmate.
This is not a course in accounting. But I really recommend that you take one. Accounting often gets a reputation as a monotonous and boring field, and since there is so much number-crunching involved, there are times when this is true. But to really understand the drivers of how business decisions get made, you need to start with some accounting, because accounting provides the basic language and building blocks for determining which companies, technologies, and projects will ultimately be viewed as successes and failures. The main goal of most businesses is, of course, to make money, but what exactly it means to have "made money" is surprisingly difficult to define in any straightforward fashion.
The goal in this section is to familiarize you with some of the most basic concepts in accounting. It's meant to get you comfortable with the language of ledgers, not necessarily to convert you into one of those guys with the green eyeshades [64]. And you might even understand why your taxes are so complicated (at least, if you live in the United States).
The website Investopedia has a very nice introduction to accounting concepts. [65] At this point, you should go ahead and read it, especially the first five parts (through the section on financial statements). Once you are comfortable with that material, go ahead and have a look at Alison Kirby Jones's accounting tutorial [62], which focuses nicely on financial accounting. The second part of the tutorial, which focuses on the accounting statements of income and cash flows, is most relevant to us.
What we will focus on in this lesson is the practice of financial accounting - the preparation of a synopsis of a company's financial health. The primary tools in financial accounting are a series of tables or "statements" that convey specific information about the financial position of the company. The most important of these statements are:
The readings for this lesson and some of the online discussion will focus on the company as the unit of analysis for accounting. But virtually all of the concepts that we develop can be applied to a specific project as well. In fact, the main purpose for going through these accounting concepts is so that you can apply them to the evaluation of energy projects, through a series of weekly assignments and (eventually) the final semester project.
The remainder of this section will be devoted to understanding the Balance Sheet and the Fundamental Accounting Identity. The Balance Sheet is just a tabular summary of the net financial position of a company at some point in time (note that this is a fundamental difference between the Balance Sheet and the P&L or Cash Flow Statements, which usually capture financial positions over a period of time, rather than just a snapshot). The Balance Sheet has just two columns: one that measures what the company owns (its assets, recorded in accounting as debits) and one that measures what the company owes (its liabilities, recorded as credits).
Assets and liabilities, from an accounting standpoint, are just two sides of the same coin. When a company has possession of something that it hopes will net it a future benefit (an asset), that future benefit generates a "claimant" (liability). After all, if a company purchases equipment, supplies or labor, the money for those assets had to come from somewhere. Typical examples of claimants would include lenders (like a bank), shareholders in the company, or even the company's owners if they are the ones that put up money. Table 8.1 summarizes things that would go in the Asset and Liability sides of the Balance Sheet (note that this is just a summary of the figures on pages 3 and 4 of the Jones reading).
| Assets | Liabilities |
| --- | --- |
| Cash | Accounts Payable (what the company owes its suppliers) |
| Accounts Receivable (what the company is owed by customers) | Claims by Lenders (what the company owes its creditors) |
| Inventory | Owner's Equity (anything that the owner or shareholders put up to fund the company) |
| Property, Plant and Equipment (PP&E: basically any physical plant assets) | |
One important feature of the Liabilities side of the Balance Sheet is how creditors (lenders) and suppliers are separated from equity owners (the company's owners and shareholders). Creditors and suppliers have claims to fixed sums of money from the company, equivalent to whatever they are owed. Equity owners have a claim to any "residual" income of the company after the creditors and suppliers have been paid. For this reason, equity owners are often referred to as "residual claimants."
Since every asset generates its own future liability, every entry on the Asset side of the Balance Sheet needs to have a corresponding entry on the Liability side. Put another way, Assets (the left side of the Balance Sheet) generally describe investing activities, which are undertaken by the company in order to make more money in the future (well, we hope). But the funds for those investing activities have to come from somewhere. So, the Liabilities (the right side of the Balance Sheet) describe the financing activities that are undertaken to support the investments. The Liabilities side of the Balance Sheet describes not only the total sum of what the company will owe to various parties in the future, but also how that total is distributed among those parties (banks, shareholders, company owners, and so forth).
Owner's equity is the simplest example. If I set up a company with $10,000 of my own money, then the balance sheet on the first day of the company's operations would look like the one shown in Table 8.2:
Assets | Liabilities |
---|---|
Cash: $10,000 | Accounts Payable: $0 (what the company owes its suppliers) |
Accounts Receivable: $0 (what the company is owed by customers) | Claims by Lenders: $0 (what the company owes its creditors) |
Inventory: $0 | Owner's Equity: $10,000 (anything that the owner or shareholders put up to fund the company) |
Property, Plant and Equipment: $0 (PP&E: basically any physical plant assets) | |
Here is another example, which builds upon Table 8.2. Suppose now I issue shares of stock in my company, to the tune of another $10,000. I use $5,000 of that money to purchase a piece of office equipment. On the Assets side, I would then have $15,000 in cash and $5,000 worth of PP&E. On the Liabilities side, I would have $20,000 worth of owner's equity. This is shown in Table 8.3.
Assets | Liabilities |
---|---|
Cash: $15,000 | Accounts Payable: $0 (what the company owes its suppliers) |
Accounts Receivable: $0 (what the company is owed by customers) | Claims by Lenders: $0 (what the company owes its creditors) |
Inventory: $0 | Owner's Equity: $20,000 (anything that the owner or shareholders put up to fund the company) |
Property, Plant and Equipment: $5,000 (PP&E: basically any physical plant assets) | |
As one final example, suppose that instead of using the cash I raised to purchase the $5,000 worth of office equipment, I purchased the equipment half with cash and half on credit from an office supply store. This credit is a promise to pay the office supply store $2,500 at some future point. So, at that point, I would have $17,500 in cash and $5,000 worth of PP&E in the Assets column. As Liabilities, I would have the $2,500 of credit from the office supply store as Accounts Payable, plus the $20,000 in owner's equity. This situation is shown in Table 8.4.
Assets | Liabilities |
---|---|
Cash: $17,500 | Accounts Payable: $2,500 (what the company owes its suppliers) |
Accounts Receivable: $0 (what the company is owed by customers) | Claims by Lenders: $0 (what the company owes its creditors) |
Inventory: $0 | Owner's Equity: $20,000 (anything that the owner or shareholders put up to fund the company) |
Property, Plant and Equipment: $5,000 (PP&E: basically any physical plant assets) | |
You might have noticed that the Assets and Liabilities columns both add up to the same number ($10,000 in Table 8.2, $20,000 in Table 8.3, and $22,500 in Table 8.4). This is no accident - the examples collectively illustrate what is known as the Fundamental Accounting Identity, which states that the sum of Liabilities to creditors and owner's equity (which, remember, includes shareholders) must equal the company's total Assets.
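As a quick check, the balance sheets in Tables 8.2 through 8.4 can be verified in a few lines of Python. This is just a sketch; the dictionary entries simply mirror the tables above.

```python
# Balance sheets from Tables 8.2-8.4, written as plain dictionaries.
balance_sheets = {
    "Table 8.2": {"assets": {"cash": 10_000},
                  "liabilities": {"owners_equity": 10_000}},
    "Table 8.3": {"assets": {"cash": 15_000, "ppe": 5_000},
                  "liabilities": {"owners_equity": 20_000}},
    "Table 8.4": {"assets": {"cash": 17_500, "ppe": 5_000},
                  "liabilities": {"accounts_payable": 2_500,
                                  "owners_equity": 20_000}},
}

def identity_holds(sheet):
    """Fundamental Accounting Identity: total assets equal total claims
    by creditors, suppliers, and equity owners."""
    return sum(sheet["assets"].values()) == sum(sheet["liabilities"].values())

for name, sheet in balance_sheets.items():
    print(name, sum(sheet["assets"].values()), identity_holds(sheet))
```

If you change any single entry (say, add inventory without a matching claim), `identity_holds` will flag the sheet as out of balance.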
The term "depreciation" usually refers to the physical degradation of some capital asset, like wear and tear. My car, for example, doesn't drive as smoothly with 120,000 miles on it as it did when it had 12,000 miles. As some piece of capital gets older, you might naturally expect it to be worth less, either because it requires more maintenance or because it cannot be resold for as high a price. (Though the concept of "value" of physical plant can be a bit tricky, as we will learn in the next section.) It is often said that the value of a new car drops by 20% as soon as it is driven off the lot, even though it's basically the same vehicle that was bought for a new-car price. This illustrates the difference that can sometimes arise between physical depreciation and the depreciation in a capital asset's store of value.
Depreciation of an asset's store of value has substantial implications for the financial analysis of energy projects. You might recall from Lesson 5 that the profits of a regulated public utility are determined in large part by its total stock of non-depreciated capital. So who determines the rate at which a power plant, substation, or other asset depreciates in value? As we will see shortly, it is usually the regulator or the tax authority. Tax authorities in many countries allow companies to "write off" the depreciated value of assets when calculating the total income on which they are subject to paying tax. The term "write off" here refers to a deduction from total taxable income, rather than a reduction in the total tax bill, per se. For example, if you can claim that some capital asset has depreciated in value by $100 over the course of some year, and the tax rate is 35%, then that $100 of asset depreciation will ultimately lower your tax bill by $35 (35% of $100), not by $100. This is the difference between a "tax deduction" and a "tax credit." Tax credits will appear later in the course when we discuss financial subsidies for energy projects.
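The deduction-versus-credit distinction is worth spelling out in a couple of lines. This sketch uses the $100 example above; the function names are mine for illustration, not standard tax terminology.

```python
def savings_from_deduction(amount, tax_rate):
    """A deduction reduces taxable income, so the tax bill only falls
    by the deducted amount times the tax rate."""
    return amount * tax_rate

def savings_from_credit(amount):
    """A credit reduces the tax bill itself, dollar for dollar."""
    return amount

# The $100 depreciation deduction from the text, at a 35% tax rate,
# is worth far less than a $100 tax credit would be:
print(round(savings_from_deduction(100, 0.35)))  # 35
print(savings_from_credit(100))                  # 100
```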
The rate at which an asset is financially depreciated for tax, regulatory, or other financial purposes may be very different from the rate at which the asset physically depreciates or loses market value. It is even possible that an asset could be treated as completely depreciated in the eyes of a regulator or the tax authority, yet could still be generating a lot of value for its owner. Many power plants in the U.S., for example, are several decades old - well beyond their intended 30- to 40-year life spans. These power plants are mostly considered to be fully depreciated assets, yet some continue to be highly profitable, selling electricity into high-priced markets.
Depreciation allowances are usually determined by the regulator, tax authority or other relevant oversight body in order to allow the owners of depreciable capital assets to recover the costs of those assets through a series of tax deductions or other gains over the course of some number of years. The idea here is to encourage investment by allowing companies to use investment vehicles to reduce their tax burden. Our discussion here will focus on depreciation allowances that are allowed by tax authorities, since those allowances are ultimately the most important for development of energy project financial statements. The U.S. Internal Revenue Service, if you happen to be interested, maintains a mind-bogglingly complex list of types of property that are eligible for different depreciation schedules and methods. Here is an "introduction" to depreciation from the IRS. [66]
A depreciation schedule lists the percentage of the original (so-called "book") value of a piece of property that can be claimed as a depreciation allowance for tax reporting purposes. Broadly, there are two types of depreciation schedules: straight-line depreciation and so-called accelerated depreciation methods.
Before we get into the mechanics of depreciation, we need to develop some notation:

- P = the original ("book") value of the asset
- S = the asset's salvage value (if any) at the end of its depreciable life
- N = the number of years over which the asset is depreciated
- D(t) = the depreciation allowance claimed in year t
- B(t) = the remaining book value of the asset at the end of year t

Based on these definitions, we can immediately see that B(t) is just equal to the original book value (P) less all of the cumulative depreciation allowances from year 1 to year t. In mathematical terms:

B(t) = P - [D(1) + D(2) + ... + D(t)]
Table 8.5 provides the mathematical formulas for some common depreciation methods. Note that Modified Accelerated Cost Recovery Systems (MACRS), which have become more commonplace, are not included in Table 8.5 but will be discussed below.
Depreciation Method | D(t) | B(t) |
---|---|---|
Straight Line | (P - S) / N | P - t(P - S)/N |
Sum of the Years' Digits | [(N - t + 1)/SOYD] (P - S) | (P - S)(N - t)(N - t + 1)/[N(N + 1)] + S |
Declining Balance | αB(t - 1) = αP(1 - α)^(t-1) | P(1 - α)^t |
Note: SOYD = 1 + 2 + ... + N = N(N + 1)/2, and α is the declining balance rate (a fixed percentage per year). |
The most straightforward depreciation method is straight-line depreciation. Under straight-line depreciation, the book value of an asset (less its salvage value, if any) can be depreciated evenly over some number of years. For example, if you had an asset with a book value of $1,000; no salvage value; and a ten-year depreciation horizon, you could claim $100 each year for ten years as a depreciation expense and tax deduction.
The other three depreciation methods that we will discuss here - sum of the years' digits, declining balance, and MACRS - are all forms of "accelerated depreciation." Under accelerated depreciation systems, a larger proportion of the asset's book value is allowed to be depreciated in the earlier years of its use, with smaller proportions depreciated in later years of use. This allows the asset owner to enjoy a lower tax burden earlier in the asset's life. Other things being equal, this leads to higher profits in the years immediately following investment. Accelerated depreciation can substantially affect the value of an asset to its owner; we will see in Lesson 9 just how this re-allocation of tax burden and profits across the useful life of an asset increases the asset's lifetime benefit to its owner.
The three accelerated depreciation methods that we will illustrate in this lesson are:

- Sum of the Years' Digits (SYD)
- Declining Balance
- the Modified Accelerated Cost Recovery System (MACRS)
As a means of comparison between all of these methods, let's take a hypothetical asset with a book value of $1,000 and zero salvage value, and depreciate that asset over a ten-year time horizon. Table 8.6 and Figure 8.1 show the values of B(t) during each year for each of the four methods. For declining balance, we will use a rate of 25% per year. For MACRS, we are using the 10-year table from IRS Publication 946 [67].
As an exercise, see if you can reproduce the table and the figure.
Year | Straight-Line | SYD | MACRS | Declining Balance |
---|---|---|---|---|
0 | $1,000.00 | $1,000.00 | $1,000.00 | $1,000.00 |
1 | $900.00 | $818.18 | $900.00 | $750.00 |
2 | $800.00 | $654.55 | $720.00 | $562.50 |
3 | $700.00 | $509.09 | $576.00 | $421.88 |
4 | $600.00 | $381.82 | $460.80 | $316.41 |
5 | $500.00 | $272.73 | $368.60 | $237.30 |
6 | $400.00 | $181.82 | $294.90 | $177.98 |
7 | $300.00 | $109.09 | $229.40 | $133.48 |
8 | $200.00 | $54.55 | $163.90 | $100.11 |
9 | $100.00 | $18.18 | $98.30 | $75.08 |
10 | $ --- | $ --- | $32.70 | $56.31 |
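One way to reproduce the book values in Table 8.6 is sketched below in Python. The straight-line, SYD, and declining-balance columns follow the formulas in Table 8.5; the MACRS column simply applies the published percentages from the IRS 10-year table, so treat those constants as data rather than a formula.

```python
def straight_line(P, S, N):
    """D(t) = (P - S) / N, the same allowance every year."""
    return [(P - S) / N for _ in range(N)]

def sum_of_years_digits(P, S, N):
    """D(t) = (N - t + 1) / (1 + 2 + ... + N) * (P - S)."""
    soyd = N * (N + 1) / 2
    return [(N - t + 1) / soyd * (P - S) for t in range(1, N + 1)]

def declining_balance(P, alpha, N):
    """D(t) = alpha * B(t-1); note the balance never quite reaches zero."""
    allowances, balance = [], P
    for _ in range(N):
        allowances.append(alpha * balance)
        balance -= alpha * balance
    return allowances

def macrs_10yr(P):
    """IRS 10-year MACRS percentages (half-year convention), applied to P."""
    rates = [10.00, 18.00, 14.40, 11.52, 9.22, 7.37,
             6.55, 6.55, 6.56, 6.55, 3.28]
    return [P * r / 100 for r in rates]

def book_values(P, allowances):
    """B(t) = P less cumulative allowances, for t = 0, 1, 2, ..."""
    values, balance = [P], P
    for d in allowances:
        balance -= d
        values.append(round(balance, 2))
    return values

# The $1,000 asset from Table 8.6: zero salvage, ten-year horizon.
print(book_values(1000, straight_line(1000, 0, 10)))
print(book_values(1000, sum_of_years_digits(1000, 0, 10)))
print(book_values(1000, declining_balance(1000, 0.25, 10)))
print(book_values(1000, macrs_10yr(1000)))
```

Each printed list should match a column of Table 8.6 (MACRS carries an eleventh, half-year allowance, which is why its column still shows $32.70 in year 10).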
There is an old saying among economists that the value of something is what someone will pay for it. (There is also an old joke about economists that an economist is someone who knows the price of everything but the value of nothing.) You can see what passes for humor among economists. But if you think about this saying, it flies in the face of all of the accounting principles that we have learned thus far, especially in the last section on depreciation. If you took that section to heart, you would conclude that the economists are wrong and the value of something is its original "book" value minus all of the accrued depreciation.
During the era of regulated utilities (both electricity and natural gas), the value of a capital asset was defined by the remaining book value B(t), which if you think about it makes perfect sense. The utility earns a fixed rate of profit on any capital asset that it constructs, and once an asset is fully depreciated, it can no longer generate any profit for the utility. By this logic, book-valuation is completely sensible. But in the electric utility industry, in particular, as the former investor-owned utilities began to sell off their generation assets, they noticed something peculiar. The utilities were unexpectedly (well, unexpectedly to the utilities, anyway) receiving offers from generation companies to buy power plants at multiple times their book value. In California, in particular, the utilities believed that they had run into a windfall and were basically stealing money from the generation companies. In reality, the generation companies were the cleverer of the two. Electricity prices in California became so high that even at multiple times book value, the prices paid by generation companies for the utility power plants were mere pennies compared to the profits they raked in. (Because of market manipulation in California's electricity market, many generation companies were later forced to return some of their profits to California ratepayers. Even so, the companies still profited handsomely from the plants.)
The example of electricity deregulation illustrates that the value of an energy asset is not independent of the market in which that asset participates. A power plant owned by a regulated utility that operates in a regulated electricity industry is indeed worth, to its owner, the total sum of its non-depreciated book value. The value of such an asset will inexorably decline over time. But an energy asset that sells its output into a competitive market, with prices set by the machinations of supply and demand, will have a value that is equal to the stream of profits that it will generate to its owner in the future. This value may change from year to year; month to month; day to day; or in some cases, hour to hour.
Here is another example, to which we will return in the assignment for this lesson. The Entergy Corporation purchased the Vermont Yankee power plant in 2002 from its former utility owners. At the time, Entergy paid $180 million for the power plant. The plant had basically been fully depreciated after 30 years of operation, so its value to the utility was very low. Since Entergy paid $180 million for the plant, it must have believed that the plant would have brought in a stream of profits amounting to at least $180 million during Entergy's planned tenure of ownership. At the time that Entergy purchased the plant, electricity prices in the newly-deregulated New England market were around $40 per MWh on average. By 2006, just a few years later, prices in the New England market had increased by 50%. Despite the fact that the Vermont Yankee plant had aged since its purchase by Entergy, the value of the plant likely increased between 2002 and 2006 since the profit it made on each MWh increased. (The cost of operating a nuclear power plant is very low and hasn't changed much in the past 20+ years.) Now, fast forward to 2013. Prices for electricity in the New England market have hit decade-long lows because of cheap natural gas and decreased electricity demand, with no higher prices in sight. The Vermont Yankee power plant faced a difficult political battle to have its operating license renewed. Because its profit margins were being shaved and the costs of continued operation seemed to be rising, Entergy decided to shut down the plant in 2014. So, the value of the plant, which had been sky high just a few years earlier, had declined rapidly. No other group has yet come forward with an offer to purchase the plant and resume operations.
These stories are all examples of mark-to-market valuation. The idea behind mark-to-market valuation is simple enough - that the value of an asset that is traded in the market (or whose output is traded in the market) can change depending on market conditions. The value of the asset on any Balance Sheet should thus change along with market conditions.
The logic behind mark-to-market accounting for appropriately-traded assets is very sound, and is actually required accounting practice in many cases. Mark-to-market accounting would not be appropriate for any asset whose value is set by an authority other than the market, such as a public utility commission. Early on in the process of electricity deregulation, following California's power crisis, mark-to-market accounting got something of a bad name because of how it was used by a company called Enron. One of Enron's claims to fame (infamy?) was figuring out how to manipulate California's newly-deregulated electricity market, but what brought the company to bankruptcy in the end was not its improper trading practices but its improper accounting practices. If you want to learn more about Enron, the book (or movie) The Smartest Guys in the Room has a lot of good discussion. (Full disclosure: Your instructor is quoted in the book.)
The Journal of Accountancy has a nice piece that describes what went wrong with Enron [63] and the role of mark-to-market accounting in hiding a lot of Enron's corporate losses. Enron used long-term contracting and derivatives trading (such as futures and options) extensively to make money, and so had to mark those contracts to market in its periodic financial statements (i.e., every quarter or so, it had to declare the current market value of all of those contracts and other sorts of deals). Enron's abuse of mark-to-market accounting basically consisted of two related practices. First, Enron would develop opaque numbers for what an energy contract was actually worth (remember from Lessons 3 and 4 that natural gas futures prices are only available through NYMEX for a period of several years into the future; so Enron would develop a value for, say, a 20-year natural gas contract that was passed off as being legitimate but was, in fact, nothing more than pure speculation). Second, Enron would record the total expected lifetime value of any given contract or project on its Balance Sheet rather than its value in that particular quarter. These practices had the effect of making Enron appear much more valuable than it actually was. In the end, the Enron affair had the positive side effect of improving mark-to-market accounting through the development of rules for increasing the transparency of how long-term contracts and other durable assets are valued.
The Balance Sheet that was discussed earlier in this lesson provides a snapshot in time of the financial health of a firm or the valuation (again, at a snapshot in time) of a specific investment project. The last two financial statements - the P&L and the cash flow statement - are used in two ways, depending on whether the entity under analysis is a company or a specific project. Either way, they have roughly the same format.
At this point, please read "A Primer on Financial Statements." [68] The first big table in the article lays out the structure of the P&L statement pretty nicely, at least up until the row that is labeled "Net Income." The rows below Net Income pertain to calculating financial metrics for valuation of specific companies. The P&L and cash flow statements for U.S. companies, at least those that are publicly traded, are laid out in a mandatory annual filing to the Securities and Exchange Commission called the Form 10-K (quarterly updates are filed on the Form 10-Q). Some companies publish similar information in their annual reports to shareholders, but these annual reports are not subject to the same regulatory scrutiny, whereas 10-K filings can be audited if necessary. So, the 10-K is the real deal as far as determining the financial position of a company. This is not to say that companies massage the numbers in their annual shareholder reports, only that you may find differences between the annual report and the 10-K, if you look hard enough. 10-K filings in the U.S. are public information, so you should be able to find them easily, as long as the company is required to file one.
Individual energy projects are often evaluated using P&L and Cash Flow statements that jointly are known as the "pro forma." Unlike the P&L and Cash Flow statements for a company, which should represent actual historical data, the pro forma represents the analyst's evaluation of the financial worthiness of a potential energy project. (It is possible to put together a historical pro forma for an individual energy project, but we'll focus on the pro forma for evaluation of potential energy projects.)
As we go through the various parts of the pro forma, it will be useful to refer to a numerical example, to keep things a little less abstract. I have posted a simple pro forma statement for a hypothetical natural gas power plant, in Microsoft Excel format. Look for the Pro Forma Example.xlsx file in Canvas. Please download the spreadsheet for reference (for those who do not have access to Excel, the spreadsheet should open easily in Open Office or in a Google Spreadsheet). The spreadsheet has four tabs, including the Plant Properties and Depreciation tabs and the P&L and Cash Flow statements.

Our hypothetical natural gas power plant has the properties shown in Table 8.7 (from the Plant Properties tab). Some of these plant properties aren't relevant to us right now, but we will come back and use this hypothetical plant as an example in future lessons.
Property | Value | Units |
---|---|---|
Capital Cost | $500,000 | |
Annual discount rate | 10% | |
Decision Horizon (N) | 10 | years |
Annual Output | 4,500 | MWh |
Marginal Cost | $60 | per MWh |
Variable O&M | $5 | per MWh |
Fixed O&M | $10,000 | per year |
Tax Rate | 35% | |
Below this table on the Plant Properties tab you will notice a set of assumed annual sales prices, in $ per MWh, for the fifteen-year operational period.
Table 8.8 shows the first few years of the P&L statement (not all fifteen; for the full P&L statement, please refer to the Excel spreadsheet in Canvas). We will refer to Table 8.8 as we go through the items on the P&L statement.
First, note that we label the construction year as "Year 0" and the operational years as "Years 1 through 15." That is just a numbering convention: we assume that the power plant is built this year and then operates for the fifteen subsequent years.
The first section of the P&L, on lines (1) through (5), outlines the costs and revenues of the power plant project.
Line | Item | Year 0 | Year 1 | Year 2 | Year 3 |
---|---|---|---|---|---|
(1) | Construction Cost | $500,000 | | | |
(2) | Annual Operating Revenue | | $428,175 | $305,332 | $383,445 |
(3) | Annual Variable Operating Cost | | $292,500 | $292,500 | $292,500 |
(4) | Annual Fixed Operating Cost | | $10,000 | $10,000 | $10,000 |
(5) | Annual Net Operating Revenue | | $125,675 | $2,832 | $80,945 |
(6) | | | | | |
(7) | Depreciation Expense | | $50,000 | $90,000 | $72,000 |
(8) | | | | | |
(9) | Taxable Net Income | | $75,675 | $(87,168) | $8,945 |
(10) | | | | | |
(11) | Taxes | | $26,486 | $ --- | $3,131 |
(12) | | | | | |
(13) | Income Net of Taxes | | $49,189 | $(87,168) | $5,814 |
Lines (7) and (9) incorporate the depreciation allowance. The depreciation allowance is calculated using the $500,000 book value of the plant (i.e., the construction cost) and the annual depreciation allowance percentages from the MACRS table (the Depreciation tab in the spreadsheet). Taxable net income on Line (9) is calculated as the net operating revenue (EBITDA), less the allowable tax deduction for depreciation.
Taxes owed by the plant are shown in Line (11), equal in this example to 35% of taxable net income, as long as taxable net income is positive. In year 2, for example, you will notice that the plant has a large negative taxable net income. This is not because the plant did not make any money (it made a little bit as you could see from Line (5) in Table 8.8) but because the allowable depreciation expense is so large that for tax purposes it appears as though the plant lost money. In this case, there is no income to be taxed, and the plant would not pay any taxes that year. On the P&L statement, its income net of taxes (Line (13) in Table 8.8) would be negative.
Of course, the plant did not really lose money in year 2, because the depreciation expense is not a real expense, in the sense of representing a cash outlay by the company that owns the power plant. For the purposes of calculating tax liability, however, the depreciation allowance is treated as a real expense.
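The tax logic on lines (5) through (13) can be sketched in a few lines of Python. The plant assumptions come from Table 8.7; the revenue figures are just the three example years shown in Table 8.8.

```python
# Plant assumptions from Table 8.7.
OUTPUT_MWH = 4500
MARGINAL_COST = 60        # $ per MWh
VARIABLE_OM = 5           # $ per MWh
FIXED_OM = 10_000         # $ per year
TAX_RATE = 0.35
BOOK_VALUE = 500_000      # construction cost

def pl_year(revenue, macrs_rate):
    """One column of the P&L: lines (5), (7), (9), (11), and (13)."""
    net_operating = revenue - OUTPUT_MWH * (MARGINAL_COST + VARIABLE_OM) - FIXED_OM
    depreciation = BOOK_VALUE * macrs_rate
    taxable = net_operating - depreciation
    taxes = TAX_RATE * taxable if taxable > 0 else 0.0  # no tax on a paper loss
    return net_operating, depreciation, taxable, taxes, taxable - taxes

# Years 1-3: revenues from the spreadsheet, first three MACRS rates
# from the 10-year table (10%, 18%, 14.4%).
revenues = [428_175, 305_332, 383_445]
macrs = [0.10, 0.18, 0.144]
for year, (revenue, rate) in enumerate(zip(revenues, macrs), start=1):
    print(year, [round(x) for x in pl_year(revenue, rate)])
```

Note how year 2 reproduces the negative taxable income from Table 8.8: the plant earns a small positive operating margin, but the $90,000 depreciation allowance pushes taxable income below zero and the tax bill to zero.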
Finally, we move onto the Cash Flow statement. In this example, the Cash Flow statement is much easier to put together than the P&L statement. Table 8.9 shows the first few years of the Cash Flow statement for purposes of illustration.
Line | Item | Year 0 | Year 1 | Year 2 | Year 3 |
---|---|---|---|---|---|
(1) | Investment Activities | $(500,000) | | | |
(2) | | | | | |
(3) | Net Income from Operating | | $49,189 | $(87,168) | $5,814 |
(4) | | | | | |
(5) | Depreciation Expenses | | $50,000 | $90,000 | $72,000 |
(6) | | | | | |
(7) | Net increase or decrease in cash | $(500,000) | $99,189 | $2,832 | $77,814 |
The first line on the Cash Flow statement lists all investment activities in that year - these represent outlays of cash (even if the Balance Sheet for that year would indicate that the investment activities were financed through debt or owner/shareholder equity). In our example, there is only one year - the construction year - with any investment activities. Line (3) is the final line item from the P&L statement, showing the post-tax net income from operating activities. The crucial point to remember here is that this figure includes the depreciation expense, which is not a real expense in the sense of any cash outlays. So, to get a real sense of how the project's cash holdings have changed throughout the year we need to add the depreciation expenses (Line (5) in Table 8.9) back to the net income (Line (3) from Table 8.9). The resulting sum is the "Net increase or decrease in cash," and it shows the end-of-year cash holdings for the power plant project. These cash holdings are used to pay back creditors and are disbursed among equity shareholders (i.e., the project's owners).
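The add-back step is simple enough to show directly. This sketch uses lines (3) and (5) of Table 8.9 for years 1 through 3.

```python
# Lines (3) and (5) of the Cash Flow statement, years 1-3.
net_income   = [49_189, -87_168, 5_814]   # after-tax income from the P&L
depreciation = [50_000,  90_000, 72_000]  # non-cash expense, added back

# Line (7): net increase or decrease in cash.
net_cash = [ni + d for ni, d in zip(net_income, depreciation)]
print(net_cash)  # [99189, 2832, 77814]
```

Even in year 2, when the P&L shows a large paper loss, the project generates positive cash, because the "loss" was entirely due to the depreciation allowance.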
Accounting is the language that is used to describe business transactions and the financial viability of companies and projects. This same language is used to determine the viability of any type of for-profit energy company and project, whether based on conventional or alternative energy resources or technologies. We learned about the three fundamental accounting statements – the Balance Sheet, the P&L, and the Cash Flow. The latter two of these make up the “pro forma” evaluation of a potential energy project.
You have reached the end of Lesson 8! Double check the What is Due for Lesson 8? list on the first page of this lesson to make sure you have completed all of the activities listed there before you begin Lesson 9. (Note: The Lesson 9 material will open the Monday after we finish Lesson 8.)
For the final project in EME 801, you are asked to work in groups of two or three to perform a financial evaluation for a real or hypothetical energy project chosen from the list of topics below. You will be expected to make some choices about the scope of your project, provide a pro forma financial analysis that reflects knowledge of project valuation techniques as well as the relevant energy commodity market for your project (e.g. the oil or petroleum product market if your project was a refinery), and to identify and analyze market, policy or regulatory factors that may affect project viability in the future. You will also be asked to peer-review one other student’s work.
In the past, some folks have asked if they can use a project topic from another course for this project assignment. My response to this is a rather firm "no" because it creates problems with the same work being used for multiple courses (which is a violation of Penn State's academic integrity policy). Your project topic should be something that interests you, and which you aren't working on already in another course context. If you are inspired to work on a specific topic because of your work or community experience that is fine - but all analysis for this project must be original work created just for this class. You may not repurpose analysis from your place of work or other context. Additionally, no company proprietary data may be used for the project. All data and information that you use must come from public-domain sources that can be freely accessed by anyone. Some relevant resources will be placed in the Final Project folder on Canvas, but groups are also encouraged to find other relevant data sources.
Each group should choose one of the following three topic areas:
1) A 100 MW solar photovoltaic power plant located in the Arizona desert. The electricity from the solar plant would be sold to a utility in Arizona under a power purchase agreement. You have the option to add battery energy storage to the solar facility.
2) A natural gas power plant located in Pennsylvania. The power plant would sell its electricity into the deregulated PJM electricity market, and would be eligible to receive energy market payments as well as capacity market payments. The plant would also be subject to carbon pricing under the Regional Greenhouse Gas Initiative. The plant would have the option to install carbon capture technology. If the plant installs carbon capture, it would avoid the carbon prices but would suffer a 30% "energy penalty" (meaning that electricity output with carbon capture would be 30% lower than without carbon capture).
3) A hydrogen fuel production facility. The facility could produce hydrogen via one of two processes: steam-methane reforming, which uses natural gas as a feedstock, or electrolysis, which would use solar energy to split water molecules. The facility would sell its hydrogen into a regional market that might include a mix of industrial facilities, refineries and power plants.
Unless otherwise specified, all project deliverables should be submitted to the Drop Boxes in the COURSE PROJECT module in Canvas. The project schedule is defined by the following milestones (which can also be found on the Canvas Calendar for EME 801):
The project is worth 450 points (45%) of your semester grade and will be graded on the following point scheme:
The links below provide an outline of the material for this lesson. Be sure to carefully read through the entire lesson before returning to Canvas to submit your assignments.
Most major energy projects last a long time. A really long time. The median age of a coal-fired power plant in the United States, for example, is somewhere around 40 years. Many nuclear power plants have received (or are applying for) license extensions that will effectively permit them to run for 60 years or longer. Even those that aren't quite so long-lived are built with the intention that they will operate for many years. Moreover, different energy technology options have different construction and operating costs (and possibly monetized carbon costs depending on future regulations, but that will be the subject for another lesson). The finances of energy projects can be difficult to evaluate for just this reason - they often involve large immediate capital outlays, followed by a stream of revenues (or costs, if the project is uneconomical) over a long period of time. The process of "discounting" is the way that we think about future costs and revenues in terms of decisions that we are forced to make today.
This lesson will be the most mathematically-intensive of the semester. We will learn about the "net present value" as a way of measuring the benefits versus the costs of a long-lived energy project. We will also discuss several other metrics that can be used to evaluate energy projects, and how these metrics are complementary — and can sometimes cause confusion.
By the end of this lesson, you should be able to:
There are lots of good online resources for understanding net present value. Our primary external resource will be the article "Have We Caught Your Interest? [69]" This article goes deeper into the math than we will need to, but is nice and concise, and has all of the relevant information on discounting. You should skim this piece before you start in on the lesson material online. You can then return to the reading as a reference.
This lesson will take us one week to complete. Please refer to the Course Calendar for specific due dates. See specific directions for the assignment below.
If you have any questions, please post them to our Questions about EME 801? discussion forum (not email), located in the Start Here! module. I will check that discussion forum daily to respond. While you are there, feel free to post your own responses if you, too, are able to help out a classmate.
Suppose, hypothetically (or perhaps not), that you really like Big Macs. Now, suppose that I was going to give you a Big Mac right now. If you are hungry, then you will be happy since I am giving you something that: (a) sates your hunger; and (b) tastes yummy. And I'm giving it to you right now. No waiting.
Now, suppose that I told you that I would give you the same Big Mac, but one hour from now. Assume that I just bought the Big Mac and I'm going to let it sit in my car for the next hour. How happy are you right now about the prospect of getting this particular Big Mac in one hour? Well, maybe it will get a little bit cold, and maybe in the meantime you would eat something else to satisfy your hunger, but in one hour it will seem like pretty much the same Big Mac.
Now suppose that I buy a Big Mac right now, but I'm going to wait until tomorrow (24 hours from now) to give it to you. How happy are you right now about this Big Mac that you will get tomorrow? Probably less so - first of all, you are hungry now. Sure, you will be hungry again tomorrow, but will that Big Mac taste quite as good after sitting in my car for a day? Probably not.
Now, (really, we're almost to the point), suppose that I buy a Big Mac right now but I'm going to wait six months to give it to you. How happy are you right now about this Big Mac that will sit in my car for six months before being given to you? What will happen to that Big Mac between now and six months from now? Will it still even be recognizable as a Big Mac? I won't even mention the special sauce…
On the other hand, it's a Big Mac. Why is it any different to you whether you get it now, a day from now, or six months from now? Basically, there are three reasons:
Evidently, the same Big Mac (whether fresh or having sat in my car for some unfathomable period of time) is worth less to you if you get it in the future than if you get it now. The process of measuring how fast that Big Mac loses value to you, relative to its value right now, is called discounting. The "discount rate" measures how quickly or slowly something loses value over time. A high discount rate means that something loses value quickly, relative to its value at the present time. A low discount rate means that something loses value slowly, relative to its value at the present time. A discount rate of zero means that something does not lose any value over time - it's worth just as much in a day, week, year, decade, or century as it is right now. Remember: the discount rate measures how quickly something loses value over time. It does not measure whether something is valuable or not valuable in absolute terms.
The particular method of discounting that we will cover in this lesson is known as "exponential discounting" because of the assumption that something loses value at a constant rate (percent per year) over time. An alternative discounting method that we won't cover, but you can look at in your spare time, is known as "hyperbolic discounting." Wikipedia has a nice article on hyperbolic discounting. [70]
Before we can dive in, we need some terminology:
Suppose that you were to put some amount PV in a simple investment vehicle, like a bank account, that paid back your money plus a rate of return r every year. The future value of that amount in one year (t = 1) would be:

FV(1) = PV × (1 + r)
If you kept your money in the account for another year, in that second year you would earn a rate of return r on all the money that was in your account the first year:

FV(2) = FV(1) × (1 + r) = PV × (1 + r)^2
Note that you have effectively earned interest in the second year on the interest that you earned in the first year. This phenomenon is called "compounding." Since interest is calculated once per year in this model, we call this "annual compounding."
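If you would like to see the compounding effect in action, here is a minimal Python sketch (the function name is ours, for illustration only, not part of any course software):

```python
# A small sketch of annual compounding: FV(t) = PV x (1 + r)^t.
# The function name is illustrative, not from any required software.
def future_value(pv, r, t):
    """Value in year t of an amount pv invested today at annual rate of return r."""
    return pv * (1 + r) ** t

# $100 at 5% per year: one year of interest, then interest on interest.
year_one = future_value(100, 0.05, 1)   # about $105.00
year_two = future_value(100, 0.05, 2)   # about $110.25, not $110.00
```

Note that the second-year value is $110.25 rather than $110.00: the extra $0.25 is the interest earned on the first year's $5 of interest, which is exactly the compounding effect described above.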
There is a detailed section in the "Have We Caught Your Interest" reading that discusses compounding at intervals more frequent than annually, all the way up to continuous compounding. Please read this section carefully, although in this class we will use annual compounding almost exclusively.
Going back to our little bank account, more generally, in year t the future value of the investment that you make today is:

FV(t) = PV × (1 + r)^t
This equation is an indifference condition - it says that you would be equally happy getting some amount of money PV right now, or the amount FV(t) if you had to wait t years to get your money. The factor (1 + r)^t just measures your opportunity cost for having to wait t years.
Now, we will flip the equation on its head. Suppose that you were promised some amount of money FV(t) in year t, sometime in the future. What is that promise of future wealth worth to you today? In other words, what would you need to be paid today in order to be equally happy between getting money today and getting the amount FV(t) in the future? We can find this by manipulating the previous equation to solve for PV. Dividing both sides by (1 + r)^t, we get:

PV = FV(t) ÷ (1 + r)^t
Assuming that r is greater than zero, this means that the value in the present of some amount of money that you are going to get in the future is lower than the face value of that amount of money. In other words, you would be willing to accept less money right now in exchange for not needing to wait to get it. This is sensible, if you think about it. First, people are impatient. Second, there is an opportunity cost to waiting. If I got some amount PV right now (rather than some larger amount FV(t) in the future), then I could take that amount PV and do something with it that might increase its worth t years from now.
The term 1 ÷ (1 + r)^t is known as the "discount factor" and it measures how much something has declined in value between now and t periods from now.
Here are two examples that illustrate the mechanics of discounting.
Example #1: Suppose that r = 0.04 (4%) and t = 20. The discount factor is equal to:

1 ÷ (1.04)^20 ≈ 0.46
Thus, PV = 0.46×FV. What this says is that the present value is less than half of the future value after 20 years.
Example #2: Now suppose that r = 0.04 and t = 100. The discount factor is equal to:

1 ÷ (1.04)^100 ≈ 0.02
Thus, PV = 0.02×FV. What this says is that the present value is only 2% of the future value.
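If you want to replicate these two examples, here is a short Python sketch (the helper name is ours, for illustration):

```python
# Discount factor: present value of $1 received t years from now,
# at an annual discount rate r. (Illustrative helper, not course software.)
def discount_factor(r, t):
    return 1 / (1 + r) ** t

df_example_1 = discount_factor(0.04, 20)   # about 0.46
df_example_2 = discount_factor(0.04, 100)  # about 0.02
```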
These two examples, although simple and without much context, illustrate two important properties of exponential discounting. First, at any positive discount rate, if you go far enough into the future, then the future is worthless, and you would never consider it when making decisions in the present. Second, the higher the discount rate, the faster the future becomes worthless relative to the present. Remember that the discount rate does not determine whether one thing will be worth more or less in the future than another thing. It only measures the value of something in the future relative to that same thing in the present (like a Big Mac six months from now versus the exact same Big Mac today).
There is almost nothing that puts students to sleep faster than the discount rate. Frankly, it's a dry topic. But the discount rate is actually front and center - in some ways, much more so than any science - in the debate over climate change and how societies should respond. The reason for the controversy is simple - if the worst impacts of climate change are going to be felt even two generations from now, then unless we use a discount rate of zero or near zero (meaning that we value the future exactly as much as the present) it is difficult to justify large and costly climate-related interventions in the present, in order to yield substantial benefits far in the future. But such a low discount rate flies in the face of what economists actually know about preferences that people seem to have for happiness now versus happiness (or wealth) in the future. If you are really interested in this topic from the perspective of climate change, Grist has a nice piece that was published recently [71]. Since our focus in this course is on commodity markets and business decisions, we'll find (in the next couple of lessons) that there is a logical way to determine the "right" discount rate for those types of decisions.
Suppose that you were an electric utility considering two potential generation investment opportunities: a natural gas fired generator with a capital cost of $600 per kW and operating costs of $50 per MWh, or a wind farm with a capital cost of $1,200 per kW and operating costs of $5 per MWh. For the purposes of this example, assume that either power plant could meet the same reliability goals that your utility had. But your regulator wants you to minimize economic costs to consumers. Which should you pick?
This is a difficult choice because it involves tradeoffs over time. The natural gas generator has a low up-front cost but a higher operating cost. The wind generator has a higher up-front cost but a very low operating cost, because fuel from the wind is free (at the margin). This tradeoff is shown visually in Figure 9.1, for a 1 MW natural gas and wind power plant — for the sake of comparison, we assume that each of the plants operates 30% of the time, so each produces 1 MW × 8,760 hours × 0.3 = 2,628 MWh per year.
This 30% figure is called the capacity factor, and it measures how much electricity a power plant produces in a year versus how much it could produce each year if it operated all the time at 100% capacity.
Capacity Factor = (Annual Production) ÷ (Capacity × 8,760 hours per year).
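As a quick check of the arithmetic, here is a one-line capacity factor calculation in Python (names are illustrative):

```python
# Capacity factor = annual production / (capacity x 8,760 hours per year).
def capacity_factor(annual_mwh, capacity_mw):
    return annual_mwh / (capacity_mw * 8760)

cf = capacity_factor(2628, 1)  # 0.3 for the 1 MW example plant above
```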
Figure 9.1 illustrates the tradeoff but doesn't say much about which decision you should make. Part of the problem is that by investing in the wind turbine, you are accepting a high cost right now in exchange for a stream of operating cost savings in the future. Is this stream of operating cost savings worth it?
One of the most frequently-used metrics to compare projects with different capital and operating cost streams is the net present value, which incorporates the present discounted value of all project costs and revenues.
The net present value (NPV) is defined by two terms: the present discounted value of costs and the present discounted value of revenues. If we let Bt be the (undiscounted) revenues (benefits) of some project during year t and we let Ct be the (undiscounted) costs of the same project during year t, then we can calculate the NPV as follows:
NPV = Σ (t = 0 to T) [ (Bt − Ct) ÷ (1 + r)^t ]
In the equations, T is the time horizon for the project and r is the discount rate. If the net present value of a project is greater than zero, then the project is said to break even or be "feasible" in present discounted value terms. If you are in a position where you are evaluating multiple alternatives, you would normally want to choose the alternative with the highest net present value.
Sometimes, we will be interested in the net present value of a project as of some year prior to T. We will call this the "Cumulative NPV" as of year X. Mathematically, this is defined as:

Cumulative NPV(X) = Σ (t = 0 to X) [ (Bt − Ct) ÷ (1 + r)^t ]

where X is some number smaller than T.
To illustrate, let's go back to one of our examples from earlier in this lesson. Our wind power plant had a capital cost of $1.2 million and annual operations costs of $13,140. Suppose that it had annual revenues of $100,000 and faced a discount rate of 10% per year. Each year, it thus has annual profits of $86,860.
Assuming that all of the capital costs were incurred in Year 0 and the plant began operating in Year 1, the present discounted value of each of the first three years of operation would be:
PV(0) = -$1.2 million (this is just the capital cost, note that this is a negative number)
PV(1) = $86,860 ÷ (1.1)^1 = $78,963.64
PV(2) = $86,860 ÷ (1.1)^2 = $71,785.12
PV(3) = $86,860 ÷ (1.1)^3 = $65,259.20
The Cumulative NPV for the first three years would be:
Cumulative NPV(0) = PV(0) = -$1.2 million
Cumulative NPV(1) = PV(0) + PV(1) = -$1.12 million
Cumulative NPV(2) = PV(0) + PV(1) + PV(2) = -$1.05 million
Cumulative NPV(3) = PV(0) + PV(1) + PV(2) + PV(3) = -$983,992
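The same Cumulative NPV table can be reproduced with a few lines of Python. This is a brute-force sketch with illustrative names, not course software:

```python
# Brute-force cumulative NPV: discount each year's net cash flow
# and keep a running total. cash_flows[t] is the year-t net cash flow.
def cumulative_npv(cash_flows, r):
    total, running = 0.0, []
    for t, cf in enumerate(cash_flows):
        total += cf / (1 + r) ** t
        running.append(total)
    return running

# Wind plant: -$1.2 million in Year 0, then $86,860 per year, r = 10%.
flows = [-1_200_000] + [86_860] * 3
print([round(v) for v in cumulative_npv(flows, 0.10)])
# -> [-1200000, -1121036, -1049251, -983992]
```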
Calculating NPV is reasonably straightforward in a spreadsheet program such as Excel. There are two functions in Excel, PV and NPV, that will calculate net present value for you. The difference between the PV and NPV functions is that the PV function assumes that you have the same profits in each period (as in our example above), while the NPV function does not make this assumption.
The video below shows an example of evaluating NPV using Excel. The example in the video is our natural gas plant from the previous lesson. Registered students can access the Excel file discussed in the video in Canvas.
So in this video, I'm going to show you how to set up a very simple undiscounted cash flow analysis and calculate a net present value in Excel two different ways. So we're going to go through two different ways of calculating the net present value. One way will be basically by brute force, which will show you how to calculate the present discounted value of costs and benefits in each year. And then you add them up by brute force to get NPV.
The second way is using a built-in function in Excel called the PV function. So we're going to go through both of these. The example or the situation that we're going to use is a power plant. So we have an example of a hypothetical power plant. And it has some capital cost, so some cost to build.
And then once it's built, it has some annual costs and revenues. And so the question is, in present value terms, does this plant make any money? Does this plant pass the cost benefit test? Is the present discounted value of all of the benefits equal-- or all of the revenues, really-- equal to the present discounted value of all of the costs? Or are they bigger, or are they smaller?
So I'm going to start off by typing in some numbers here. And this spreadsheet will also be posted up on Angel if you want to take a look at it. So I'm going to assume that the capital cost is $500,000. So that's how much it costs to build the plant. I'm going to assume that the interest rate here is 10%. I'm going to assume that the relevant decision horizon here is 5 years. So that's the time period over which we're evaluating the profitability of the plant.
I'm going to assume that its annual output is 4,500 megawatt hours of electrical energy. I'm going to assume that its variable cost, so the cost to produce 1 megawatt hour, is $60. And so the total annual operating cost, which I have here written as total variable cost, is going to be equal to $60 per megawatt hour times the number of megawatt hours. And so what we get is $270,000 per year. That's how much it costs to operate this plant, assuming that it produces 4,500 megawatt hours each year.
So the sales price for each megawatt hour I assume is going to be $90 per megawatt hour. And so then we can calculate the annual revenue is equal to the sales price, $90 per megawatt hour, times the number of megawatt hours. And so what we get is annual revenues of $405,000 per year.
The stuff below the cost and revenue figures in the spreadsheet is basically the simple version of the cash flow balance sheet for this power plant. So you have a column for capital cost, a column for operating cost, a column for revenue, a column for the annual undiscounted cash flow, or the net benefit, a column for the discounted cash flow, and then a column here, which I have labeled as cumulative net present value, which will show the overall net present value of this power plant after each year.
And we have year 0, so the current year, all the way down through 5 years from now. So now we have to fill in this table with numbers. So the capital cost we assume is incurred only in year 0. So you only have to spend money this year to build the plant. So that's going to be equal to minus $500,000. You have to be a little careful here when you're doing these things in Excel. You have to be careful that costs are negative numbers and revenues are positive numbers. Otherwise, Excel will mess up your calculations because it doesn't know which one is which.
So this plant has a capital cost of $500,000 in year 0, and no capital cost in years 1 through 5. We are going to assume here that the plant is built in year 0 and operates in years 1 through 5. So in each of years 1 through 5, we're going to have an operating cost equal to minus $270,000. And that's going to be the same for each year. Revenues in each year are going to be equal to positive $405,000. And those are going to be the same for each year.
So the annual undiscounted cash flow is going to be the sum of capital cost, operating cost, and revenue for each year. So I'm going to use the sum function here in Excel. Summing over those three columns. In undiscounted terms, this plant costs the owner $500,000 in the first year, and then brings in benefits equal to $135,000 a year for five years.
So now we need to discount this. So remember that to discount some future value back to the present, we take that future value and divide by 1 plus the interest rate raised to the t-th power, where t is the year that we're talking about. So we're going to take this and divide it by 1 plus-- and I'm going to lock this cell by putting dollar signs around it-- raised to the 0th power. And so this is just going to give us minus $500,000, because we're raising something to the 0th power, so it's going to be 1.
And then in this cell here, we have next year's future value of $135,000 discounted by one year. So this is $135,000 that we would enjoy in one year. And that's equivalent to $122,727.27 this year. We can do the same thing for years 2 through 5.
So what the discounted cash flow column tells us is the present discounted value of each year's costs and benefits. What we sometimes want to know is, after some number of years, how is the plant doing? What are its cumulative present discounted costs and benefits up to that year?
So to do that, we start with year 0, which is minus $500,000. And then the cumulative net present value of the power plant after the first year is equal to its net present value after 0 years, or minus $500,000. Plus whatever the present discounted value of the first year's net revenues are.
The cumulative net present value in year 2 is equal to whatever the plant was worth cumulatively at the end of year 1, which would be this $377,272.73 plus the present discounted value of its net revenues in year 2.
And if we keep going here, what we find is that at the end of year 5, in present discounted value terms, the plant is worth $11,756.21. So that's the same thing that you would get if you were to add up all of the discounted cash flows. So I'll do that here.
So if you add all of the discounted cash flows, just following the formula that was in the reading and in the lecture notes, you'll get the same thing as what you would get in year 5 if you calculated the cumulative net present value of this plant after every year.
So that's how you would use Excel to calculate the net present value, or to do what we call discounted cash flow analysis by brute force. So what we did was we figured out the costs and revenues each year in these cells over here, calculated the undiscounted cash flow, discounted appropriately back to present value terms, and then added everything up.
In Excel, there's a way to do this with a shortcut, without having to do all of these calculations manually. And it is called the Present Value or PV function. And so the way that you would call the PV function is to type =pv and then parentheses. And then the first thing it asks you for is the annual interest rate, then the number of periods. And then the undiscounted cash flow each year in years 1 through t. So here in years 1 through 5. So we're going to give it that.
One of the things about using the PV function is that it does not understand year 0. So it does all of its discounting starting from year 1. And so you then have to add in year 0. And the other thing about the PV function is that it assumes that all numbers are negative. So you have to put a negative sign in front of the PV function. And when you do that, you will see that you get the same answer that we did using the brute force method.
There's another function called NPV, which you can use in much the same way that you use PV, except you would use NPV when your annual undiscounted cash flows are not the same in every year. In our example here, the annual undiscounted cash flows were basically the same every year. So the way that you call-- so, here, I'll label this.
So the way that you call this is =npv, and then the discount rate. And then your stream of future undiscounted cash flows. And just like the PV function, the NPV function does not understand year 0. So you have to add in anything in year 0 separately. And you get exactly the same answer.
There are two special cases where calculating the NPV is especially simple - the "annuity" and the "perpetuity." When a project's annual cash flow is the same in each year, not unlike a fixed-rate mortgage, we refer to this type of project as having an annuity structure. In this case, because the cash flow is the same each year, the summation term in the NPV can be greatly simplified. The formula for the present value of an annuity is shown below; if you are interested you can find the derivation in the reading accompanying this lesson.
PV = a × ((1 + r)^T − 1) ÷ (r × (1 + r)^T)

where a is the annual cash flow. To illustrate this, let's calculate the present value of the wind power plant over a three-year operating time horizon (T = 3), using the annuity formula. One of the tricks here is that you need to take the Year 0 cash flow (the $1.2 million capital cost) outside of the formula.
NPV = −$1,200,000 + $86,860 × ((1.1)^3 − 1) ÷ (0.1 × (1.1)^3) ≈ −$983,992, which is just what we got using the NPV formula.
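You can verify the annuity shortcut numerically with a short sketch (the helper name is assumed, for illustration):

```python
# Present value of an annuity: a constant cash flow `a` for T years at rate r.
def annuity_pv(a, r, T):
    return a * ((1 + r) ** T - 1) / (r * (1 + r) ** T)

# Wind plant over three years: the Year 0 capital cost stays outside the formula.
npv = -1_200_000 + annuity_pv(86_860, 0.10, 3)   # about -$983,992
```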
If an annuity involves identical payments over a long period of time (many decades, for example), we can approximate the NPV of the annuity using an extremely simple formula called a perpetuity. If we take the annuity formula and let T get really, really large (tending towards infinity), then the "−1" term in the numerator becomes meaningless relative to the (1 + r)^T term, and the present value becomes:
PV = a ÷ r.
As a rule of thumb, if the annuity lasts longer than 30 years, you can probably approximate the present value reasonably well using the perpetuity formula.
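The rule of thumb is easy to check. This sketch compares the exact annuity formula to the perpetuity approximation at a 10% discount rate over a 30-year life (the numbers are illustrative):

```python
def annuity_pv(a, r, T):
    """Exact present value of a T-year annuity of `a` per year at rate r."""
    return a * ((1 + r) ** T - 1) / (r * (1 + r) ** T)

def perpetuity_pv(a, r):
    """Perpetuity approximation: a / r."""
    return a / r

exact = annuity_pv(100, 0.10, 30)    # about 942.69
approx = perpetuity_pv(100, 0.10)    # 1000.00
# The approximation is within about 6% after 30 years, and the gap
# keeps shrinking as T grows.
```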
The internal rate of return (IRR) is one of the most frequently-used metrics for assessing investment opportunities. The IRR is defined as the discount rate for which the NPV of a project is zero. The definition is simple, but the IRR is generally impossible to calculate without a computer.
If you use Excel, there is a built-in IRR function that will calculate the IRR for you, given a stream of costs and benefits over time. To see an example of the function's syntax, please have a look at the NPV Example.xlsx file posted in the Lesson 09 module in Canvas. (This is the same Excel file that was discussed in the NPV video.)
Unlike the NPV, which takes units of dollars, the IRR is given in percentage terms (% discount rate per year such that the project NPV is zero). We call this a "yield" measure of return. This can be very convenient when comparing different types of projects. In many cases the project with the largest yield (i.e., highest IRR) will be the most desirable. The IRR can also be compared to the investor's "hurdle rate," which is the lowest return that an investor is willing to accept before putting money into a project. Energy projects that will sell their output into competitive markets often need a yield higher than, say, a 15% hurdle rate over a five year period. This means that the project's return on investment must be at least 15%, and that this yield must be realized within five years. If the IRR of a prospective project is higher than the hurdle rate, the project could be considered attractive to an investor.
While the IRR is often an appropriate measure for determining whether an individual project is worthwhile, there are three cases where IRR may not be very useful or may yield misleading information.
First, if a project has some years with positive cash flow and some years with negative cash flow (an example is on the power plant pro forma from the previous lesson), then it is possible that the IRR may have multiple values. Mathematically, this arises because a high-order polynomial equation may have multiple (non-unique) roots. As a simple example, if the cash flows for a project over three years are -$10, $21 and -$11, then there would be two IRRs for this potential project: 0% (because it just breaks even) and 10%.
Second, if a project has negative total undiscounted cash flows over its lifetime, then the IRR is mathematically undefined.
Third, it is possible that the project with the highest IRR may not be the project with the largest NPV. In other words, some types of projects can see their NPV increase when the discount rate goes up, which is the opposite of what we would expect to happen. This situation occurs when projects have large cash outlays at their end-of-life. Nuclear power plants are a good example — at the end of a nuclear power plant's life (at least in most developed countries), the owner must pay a large amount to have the plant decommissioned. If the discount rate used by the plant's owner is low, this increases the contribution of that end-of-life cash outlay to the NPV. If the discount rate used by the plant's owner is high, then that end-of-life cash outlay is not very important, in present value terms.
Here is a simple example. Suppose that a hypothetical investment requires a $5 cash outlay in Year 0, then earns $5 per year in Years 1 through 5. In Year 6, the project requires a cash outlay of $22.50. As an exercise, calculate the NPV assuming a discount rate of 10% per year and 5% per year. You should get NPVs of $1.25 and -$0.14.
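Both of the numerical claims above can be checked with a brute-force NPV calculation (a sketch with illustrative names):

```python
# Brute-force NPV: cash_flows[t] is the net cash flow in year t.
def npv(cash_flows, r):
    return sum(cf / (1 + r) ** t for t, cf in enumerate(cash_flows))

# Multiple IRRs: both r = 0% and r = 10% make this project's NPV zero.
flows_two_irrs = [-10, 21, -11]
zero_a = npv(flows_two_irrs, 0.00)   # 0.0
zero_b = npv(flows_two_irrs, 0.10)   # 0.0 (to floating-point precision)

# End-of-life outlay: NPV is higher at the *higher* discount rate.
flows_decommission = [-5] + [5] * 5 + [-22.5]
print(round(npv(flows_decommission, 0.10), 2))  # -> 1.25
print(round(npv(flows_decommission, 0.05), 2))  # -> -0.14
```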
Let's return to our wind power and natural gas power plant example from earlier in this lesson. Suppose that both power plants were selling electricity into the same deregulated generation market and both had the same expected operational life. Which plant would be more profitable? Since both plants would be facing the same market price for the electricity that they sell, the more profitable plant would be the one that had the lower average cost per Megawatt-hour of electricity over its entire lifetime.
The Levelized Cost of Energy (LCOE) can be used to help evaluate problems like this one, and is one of the most commonly used metrics for assessing the financial viability of energy projects. It is used particularly often in situations like the one we just discussed - comparing the lifetime costs of different technologies for electric power generation. The LCOE can, however, be applied to other energy projects as well (like oil and gas wells, or refineries).
The LCOE is defined as the energy price ($ per unit of energy output) for which the Net Present Value of the investment is zero.
The LCOE is thus the average revenue per unit of energy output (so this would be $/MWh for a power plant, or $/barrel for an oil well, for example) over a project's lifetime such that the plant breaks even. The LCOE is sometimes called the Unit Technical Cost (UTC). It represents the lifetime average cost of energy for a specific project.
We will now get into the mathematics of calculating the LCOE. We will first present the most generic LCOE formula, and then we will discuss some simplifications of the formula.
LCOE is defined as the solution to the equation:

Σ (t = 0 to T) [ (LCOE × Qt − Ct − Mt) ÷ (1 + r)^t ] = 0
where Ct represents all capital costs incurred in year t (these may be zero except during the first few years of the project); Mt represents all operational costs incurred in year t, and Qt represents the total output of the project in year t. The term Ct + Mt represents the annual costs of the project (which may include payments on capital, fuel, labor, land leases and so forth). The term Qt represents the annual energy output of the plant.
Note that if all capital costs are incurred in year zero, then the term Ct factors out of the LCOE equation. In this case you will sometimes see the capital cost term referred to as "Total Installed Cost" (TIC) or "Overnight Cost" (OC). In this case we write the LCOE equation as:

Σ (t = 1 to T) [ LCOE × Qt ÷ (1 + r)^t ] = TIC + Σ (t = 1 to T) [ Mt ÷ (1 + r)^t ]
In some other contexts (for those of you taking AE 878 through the RESS program, for example), you may see the discount rate r referred to as the "Weighted Average Cost of Capital" (WACC). We will devote an entire lesson later in the term to the relationship between the discount rate and the WACC (sneak preview: if the entity making the project investment is a for-profit entity, then the discount rate and WACC should be the same thing), and methods for calculating what the WACC should be.
We can thus solve for LCOE as:

LCOE = [ TIC + Σ (t = 1 to T) Mt ÷ (1 + r)^t ] ÷ [ Σ (t = 1 to T) Qt ÷ (1 + r)^t ]
There are a couple of ways to make this calculation easier. Oftentimes when evaluating prospective energy projects, we make two assumptions: that the annual output Q is the same in every year of the project's life, and that the annual operating cost M is the same in every year.
In this case, the Q and M terms from the LCOE equation are the same in each year, and we can write the LCOE as the sum of two terms, a Levelized Fixed Cost (LFC) and a Levelized Variable Cost (LVC):

LCOE = LFC + LVC
If the variable cost of production (this would include fuel, labor and any variable operations/maintenance costs) doesn't change from year to year, then the LVC is just equal to this total variable cost per unit of output. Referencing the LCOE equations above, LVC would just be equal to M ÷ Q. (The LVC may also just be given in the problem statement, as in the examples below.)
Calculating LFC is a little bit more complicated. Assume that the project involves a discount rate r; the life of the project is T years; and the capital costs are paid in one lump sum TIC at the beginning of the project. LFC solves the equation:

TIC = Σ (t = 1 to T) [ LFC × Q ÷ (1 + r)^t ]
which we can rewrite as:

LFC = TIC ÷ [ Q × Σ (t = 1 to T) 1 ÷ (1 + r)^t ]
Using some mathematics of finite sums (if you are really curious, Wikipedia has a detailed article on the "geometric series [73]," which describes the denominator of the LFC equation), since 1 ÷ (1 + r) is less than one, we can rewrite the sum in the denominator as:

Σ (t = 1 to T) 1 ÷ (1 + r)^t = ((1 + r)^T − 1) ÷ (r × (1 + r)^T)
Thus,

LFC = (TIC ÷ Q) × (r × (1 + r)^T) ÷ ((1 + r)^T − 1)
and finally, we have our expression for LCOE:

LCOE = (TIC ÷ Q) × (r × (1 + r)^T) ÷ ((1 + r)^T − 1) + M ÷ Q
Along these same lines, another (less messy) way to write the LCOE when output and variable costs are constant over time uses the "fixed charge rate" (FCR). The FCR is just the fraction of the Total Installed Cost (TIC) that must be set aside each year to retire capital costs (which includes interest on debt, return on equity and so forth - we'll discuss these in more detail in future lessons). Thus, TIC × FCR is the annuity payment (the sum of principal plus interest payments, like you would have with a home mortgage or a college loan) needed to pay off the investment's capital cost. The FCR is calculated as:

FCR = (r × (1 + r)^T) ÷ ((1 + r)^T − 1)
(You may see or have seen the FCR equation written with the WACC rather than the discount rate r. Remember that for our purposes, there is really no difference between the two.)
Using the fixed charge rate, the LCOE can be written as:

LCOE = (TIC × FCR) ÷ Q + M ÷ Q
In this simpler version of the LCOE equation, note that the first term is just the Levelized Fixed Cost (LFC) and the second term (M/Q) is just the Levelized Variable Cost (LVC).
Here is an example: Suppose that a power plant costs $10 billion to build and has an expected life of 30 years. The variable cost of producing one MWh of electricity is $20. It will operate 24 hours a day, 360 days a year at a capacity of 1000 MW. (Note: to get output, multiply capacity and hours of operation over the plant's life). What is the levelized cost of energy if the interest rate is 5%?
Here is the answer: First, we calculate the total amount of electricity produced annually:
(360 days per year) × (24 hours per day) × (1,000 MW) = 8.64 million MWh per year.
This is Q in our LCOE formula. LVC is equal to $20 per MWh. So, we calculate LCOE as:

LCOE = ($10 billion × FCR) ÷ Q + $20 per MWh, where FCR = (0.05 × (1.05)^30) ÷ ((1.05)^30 − 1) ≈ 0.065. This gives LCOE ≈ $650.5 million ÷ 8.64 million MWh + $20 ≈ $75.29 + $20 = $95.29 per MWh.
As an exercise for yourself, calculate the LCOE for the natural gas power plant and the wind power plant that we laid out earlier in this lesson. As a reminder, the wind plant has a capital cost of $1.2 million and a variable cost of $5/MWh. The natural gas plant has a capital cost of $600,000 and a variable cost of $50/MWh. Each plant produces 2,628 MWh per year. Assume a 10% annual discount rate and a 20-year life for each project. You should find that the wind plant has LCOE = $58.63/MWh and the gas plant has LCOE = $76.82 per MWh.
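If you would like to check your answers (and the worked example above), here is a Python sketch using the fixed charge rate version of the LCOE formula; the function names are ours, for illustration:

```python
# Fixed charge rate: annuity payment per dollar of installed capital cost.
def fcr(r, T):
    return r * (1 + r) ** T / ((1 + r) ** T - 1)

# LCOE with constant annual output and variable cost: (TIC x FCR)/Q + M/Q.
def lcoe(tic, lvc_per_mwh, annual_mwh, r, T):
    return tic * fcr(r, T) / annual_mwh + lvc_per_mwh

lcoe_wind = lcoe(1_200_000, 5, 2628, 0.10, 20)        # about $58.63/MWh
lcoe_gas = lcoe(600_000, 50, 2628, 0.10, 20)          # about $76.82/MWh
lcoe_big_plant = lcoe(10e9, 20, 8_640_000, 0.05, 30)  # about $95.29/MWh
```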
The LCOE can be used to compare energy projects to prevailing market prices. If the market price is higher than the LCOE, then the margin per unit of output is positive (Market price - LCOE is greater than zero) and the project should be profitable. If the market price is lower than the LCOE, then the project will have negative margins and will not be profitable. There are some pitfalls to using LCOE in this way to evaluate variable renewables like wind and solar, since the LCOE is often compared to the average electricity price. If you think about it, this comparison is biased against solar and biased towards wind because solar is more likely to be producing electricity during the daytime (when prices are high) and wind is more likely to be producing electricity during the nighttime (when prices are usually low). A more consistent approach, which is just as relevant for fossil-fired power plants as for renewables, would be to compare the LCOE to the average price when you would expect the power plant to be generating electricity.
Most energy projects involve large capital outlays at the beginning of project life followed by a stream of costs and benefits during the project's years in operation. (Some types of energy projects would end their lives with large capital outlays as well, to handle decommissioning or other environmental issues.) Since waiting to enjoy future benefits from an energy project involves opportunity costs, a dollar of benefit in the future is worth less than a dollar of benefit now. These streams of future costs and benefits need to be expressed in terms of value at the time that the project decision is being considered or initiated. This process is called "discounting."
Project alternatives may have different capital and operating costs even if they ultimately produce the same product. Electricity is probably the best example of this - power plants generally exhibit a tradeoff between capital and operating cost. We developed three different but related metrics to evaluate stand-alone projects and to compare the relative economic merits of project alternatives. The net present value will tell you which project will be the most profitable in absolute present-value dollar terms. The internal rate of return can be useful in comparing percentage returns or "yields" on different projects, or for checking whether a proposed investment exceeds the hurdle rate set by an individual investor or company. The internal rate of return cannot, however, always identify the most profitable project. The levelized cost of energy will tell you the average revenue per unit of output required for a proposed investment to break even, in present value terms. Levelized costs are often compared to prevailing market prices to estimate margins, but this comparison needs to be done with care if energy projects are not expected to run around the clock.
You have reached the end of Lesson 9! Double check the What is Due for Lesson 9? list on the first page of this lesson to make sure you have completed all of the activities listed there before you begin Lesson 10. Note: The Lesson 10 material will open Monday after we finish Lesson 9.
The links below provide an outline of the material for this lesson. Be sure to carefully read through the entire lesson before returning to Canvas to submit your assignments.
In one sense, energy projects are no different than playing the roulette wheel in Las Vegas. Both are inherently risky, and you may well not have a shirt left on your back at the end. Actually, in one important sense, you should feel more confident about the roulette wheel than about energy projects – with roulette, at least you know how terrible your odds are, and nothing is going to change those terrible odds. Because the financial fate of energy projects so often winds up in the hands of market and political factors, both of which can turn on a dime (just ask the coal industry following the publication of the Clean Power Plan [74], or electricity demand response companies like EnerNOC, a major aspect of whose business model was struck down by the DC Circuit Court [75] in 2014 – even though that ruling was eventually overturned by the Supreme Court), it can be difficult to know whether your project has a good or bad chance of succeeding. Given the nature of energy project investing – large up-front capital costs that must be recovered over a long duration – it’s sometimes a wonder that anything gets built at all!
The basic problem here is that the future is uncertain. In this lesson, we will look at a few different ways of considering uncertainty in project evaluation. Uncertainty has lots of different sources – markets may shift during a project cycle, making commodities more or less valuable; regulations may change; or public opposition to a project may be stronger than initially anticipated, making project permitting and siting all but impossible. Our focus here will be primarily on quantitative methods for incorporating uncertainty into project analyses, rather than on the sources of uncertainty themselves.
By the end of this lesson, you should be able to:
We will draw on sections from the following readings. The readings on coal-fired power plants and wind plants are a bit out of date but are useful illustrations of the types of uncertainty that energy projects face in the real world.
Registered students can access copies of the readings in the Lesson 10 module.
This lesson will take us one week to complete. Please refer to the Course Calendar for specific due dates. See specific directions for the assignment below.
If you have any questions, please post them to our Questions about EME 801? discussion forum (not email), located in the Start Here! module. I will check that discussion forum daily to respond. While you are there, feel free to post your own responses if you, too, are able to help out a classmate.
Before we get into quantitative methods for considering uncertainty in energy project evaluation, have a look at the piece from the EIA on the wind production tax credit [78] as well as the following story from CNN on nuclear energy [80]. (Note that the EIA story is still relevant even though the debate on the wind production tax credit has now come and gone.)
There are certainly enough differences between the situations being faced by the wind and nuclear industries. But the commonality among them here is that both of these low-carbon power generation technologies face substantial uncertainty that will affect project investment decisions going forward. Wind energy has enjoyed a number of years of rapid growth in the U.S. and in Europe, but it is unclear how long that level of growth can be sustained. Nuclear energy appears to be in a period of decline. Whatever happened to the “energy transition” towards low-carbon electricity generation?
Figure 10.1 shows a screen shot from the PJM Renewable Dashboard [81], which illustrates all renewable energy projects within the PJM footprint that are currently in the queue for permission to interconnect to the PJM transmission grid. (There isn’t anything particularly special about PJM here, except that they have this nice graphical display of proposed and pending renewable energy projects.) The figure, which changes frequently as new projects are added and withdrawn from the queue, suggests a lot of enthusiasm for renewable electricity generation in PJM, particularly solar and wind. In fact, until coal-fired power plants started announcing their intention to retire a couple of years back, wind energy represented the largest source of planned new generation capacity of any type in the PJM system (yes, even more than natural gas, though that is no longer true as gas power plant projects position themselves to replace retiring coal plants). While there is plenty of interest, the reality is that few of these projects will actually get built, because their revenue streams depend on a number of factors that are highly uncertain.
Virtually all investments are always made under the shroud of uncertainty, but the problem is particularly acute for energy investments. The development of energy resources typically requires large up-front capital outlays, so investors need some assurances that revenue streams over time will be sufficient for cost recovery. We will actually discuss this more in Lesson 12 when we cover how specific regulatory and de-regulatory actions have affected the investment environment for power plants.
You can get something of a sneak peek at how changes in market institutions in the electricity industry have increased both the potential financial rewards from low-cost electricity generation and the uncertainty in power generation investment by reading the selection from Morgan, et al., “The U.S. Electric Power Sector and Climate Change Mitigation.” [82]
Returning to the nuclear and wind energy examples for a moment, these two technologies nicely represent two distinct but equally important sources of uncertainty facing energy projects.
The example with nuclear energy is primarily an illustration of market uncertainty. Just a few years ago, before the natural gas supply boom, electricity prices were high and nuclear power (having very high reliability and low production costs) was highly profitable. You might remember the profit calculation problems from Lesson 6 – the low-cost generator in that example was representative of a nuclear power plant. Companies that ran large fleets of nuclear power plants well – the biggest in the United States was Exelon – earned large returns for shareholders. Many existing nuclear power plants continue to make money, though the number of nuclear power plant retirements has accelerated in recent years. The biggest problem facing new nuclear plants, as well as some existing plants, is uncertainty in the market price of electricity. In deregulated electricity markets, the price of electricity is most often driven by the price of natural gas. As gas prices have plummeted with the unconventional natural gas boom, so have electricity prices. This squeezes profit margins for existing plants and makes the financial prospects for new plants look bleak. Will natural gas prices rise again in five to ten years (roughly the time line for a new nuclear build in the United States)? Who knows – current markets are not betting on it, as you could observe by looking at the futures curve for natural gas.
The example with wind energy is primarily an illustration of regulatory uncertainty. As we have discovered, wind energy is not always competitive with other forms of power generation without some form of subsidy or incentive (the same could actually be said for most low-carbon power generation technologies, including nuclear). The production tax credit (PTC) was implemented in order to provide a boost to the wind industry. The PTC basically amounts to a direct payment of a bit more than $20 per MWh for all qualifying wind energy plants. Other countries have implemented subsidy programs similar to the PTC, such as feed-in tariffs. But as the EIA article points out, since the PTC does not represent a long-term commitment by the U.S. government (it needs to be renewed periodically), when the PTC is not renewed, wind energy investments shrivel because the future availability of subsidies is uncertain. The piece by Morgan, et al., points out that the lack of a commitment to a carbon policy by the United States has been a major barrier to the development of a variety of low-carbon energy resources – not just wind and solar but also low-carbon bio-energy and technologies that can capture and store greenhouse gas emissions.
Other major sources of uncertainty for energy projects include:
Uncertainty cannot be eliminated, but we can incorporate it into our project analyses in a few different ways, as we’ll discuss in the next several components of the lesson. Our use of the word “uncertainty” here rather than “risk” is purposeful. Use of the term “risk” implies some knowledge about the probabilities of uncertain events, like a coin flip or the roll of some dice. But placing probabilities on specific events isn’t always straightforward or even possible. So, we use the broader term “uncertainty” to describe situations where some set of future outcomes is unknown. In particular (for our context), these outcomes are unknown at the time that some decision is to be made. Broadly, we can identify a few different types of such decision situations:
We will first discuss decision problems of the second type, then move on to the third type.
Before we dive into probability and expected monetary value (EMV), we will introduce a motivational problem from the petroleum industry. You are exploring the possibility of drilling a potential new oil field. You can either do the drilling yourself, or you can “farm out” the drilling operation to a partner. The field in which you are proposing to drill may or may not have oil – you don’t know until you drill and find out. If you drill yourself, you take on the risk if the field is not a producer, but if the field is a producer, you don’t need to share your profits with anyone. If you farm out the drilling operation, you are not exposed to any losses if the field is not a producer but if the field is a producer, the drilling company will take the lion’s share of the profits. Table 10.1 shows the net present value of the prospective oil field.
| | Field is Dry | Field is a Producer |
| --- | --- | --- |
| Drill Yourself | -$250,000 | $500,000 |
| Farm Out | $0 | $50,000 |
Clearly, if you knew the field was a producer you would want to drill yourself (and if you knew it weren’t, then you would not want to drill at all). But you don’t know this before you make your drilling decision. What should you do?
Suppose that you had enough information on the productivity of wells drilled in a similar geology to estimate that the probability of a dry hole was 65% and the probability of a producing well was 35%. Do these probabilities make your life any easier?
These decision problems can be solved by calculating a quantity known as the “expected monetary value” (EMV) – basically a probability-weighted average of net present values of different outcomes. Formally, the EMV is defined by determining probabilities of each distinct or “mutually exclusive” outcome, determining the NPV under each of the possible outcomes, and then weighting each possible value of the NPV by its probability. In mathematical terms, if Z is some alternative; Y1, Y2, …, Yn represent a set of possible outcomes of some uncertain variable; X1, X2, …, Xn represent the NPVs associated with each of the possible outcomes; and P(Y1), P(Y2), …, P(Yn) represent the probabilities of each of the outcomes, then the EMV is defined by:
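In symbols, that definition can be written as the following probability-weighted sum (a reconstruction from the variables just defined; the original equation did not survive extraction):

```latex
% EMV of alternative Z: weight each outcome's NPV by its probability
\[
\mathrm{EMV}(Z) = \sum_{i=1}^{n} P(Y_i)\,X_i
                = P(Y_1)X_1 + P(Y_2)X_2 + \cdots + P(Y_n)X_n
\]
```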
The alternative with the highest EMV would be the option chosen. A decision-maker who chooses among alternatives in this way would be called an “expected-value decision-maker.”
Some things to remember about probabilities:
Now, back to our oil field problem. We’ll describe the problem again, using the language of expected monetary value. There are two alternatives – to drill yourself or to farm-out. The uncertainty is in the outcome of the drilling process. The set of mutually exclusive and exhaustive outcomes is {dry hole, producer}. These are the only two possible outcomes regardless of whether you choose to drill yourself or farm-out. The NPVs of each alternative, under each possible outcome, are shown in Table 10.1. To decide whether to drill or farm out, you would calculate the EMV of each option as follows:
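Numerically, using the Table 10.1 NPVs and the 65%/35% probabilities, the two EMV calculations look like this (a sketch of the arithmetic, not code from the lesson):

```python
# EMV = probability-weighted average of NPVs across the two outcomes
P_DRY, P_PRODUCER = 0.65, 0.35

# Drill yourself: lose $250,000 if dry, gain $500,000 if a producer
emv_drill = P_DRY * -250_000 + P_PRODUCER * 500_000    # ≈ $12,500

# Farm out: $0 if dry, gain $50,000 if a producer
emv_farm_out = P_DRY * 0 + P_PRODUCER * 50_000         # ≈ $17,500
```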
In this case, you should choose to farm out the drilling operation.
The basic idea behind EMV is fairly straightforward, assuming that you can actually determine the relevant probabilities with some precision. But the meaning of the EMV is a little bit subtle and requires some degree of care in interpretation. Let’s take a very basic situation – a coin flip. Suppose that we were to flip a coin. If it shows heads, you must pay me $1. If it shows tails, then I must pay you $1. The EMV of this game, assuming that heads and tails have equal probabilities, is $0. (See if you can figure out why, based on the EMV equation and the fact that P(heads) = 0.5 and P(tails) = 0.5.) But if you think about this for a minute, how useful is the EMV? If you play the coin-flipping game once, you will never ever have an outcome where the payoff to you is $0. The payoff will either be that you gain or lose one dollar.
If you look at the EMVs from the oil-field problem, you will see the same thing. The EMV of drilling is $12,500 but there is no turn of events under which you would wind up earning $12,500 – if you drill, it would either be that you lose $250,000 or gain $500,000. Similarly, if you farm out you will never earn exactly $17,500. You will either lose nothing (payoff of $0) or you will gain $50,000. So what does the EMV mean when it tells you that farming-out is the better option?
It’s important to remember that the EMV is a type of average. If you were to play the coin-flip game or the oil-field game a large number of times under identical circumstances, and make the same decision each time (i.e., to drill or to farm out), then over the long run you would expect to wind up with $12,500 if you choose to drill and $17,500 if you choose to farm out. While the EMV may be useful for gamblers or serial investors, using EMV needs to be done with some care for stand-alone projects in the face of uncertainty.
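The long-run interpretation can be checked with a quick simulation (my own illustration; the trial count and seed are arbitrary choices): if you repeat the oil-field gamble many times, the average payoff of each option converges to its EMV.

```python
import random

# Simulate many independent oil-field gambles and compare average payoffs
# to the EMVs of $12,500 (drill) and $17,500 (farm out).
random.seed(42)
TRIALS = 100_000
P_DRY = 0.65

drill_payoffs = []
farm_payoffs = []
for _ in range(TRIALS):
    dry = random.random() < P_DRY
    drill_payoffs.append(-250_000 if dry else 500_000)
    farm_payoffs.append(0 if dry else 50_000)

avg_drill = sum(drill_payoffs) / TRIALS   # approaches $12,500
avg_farm = sum(farm_payoffs) / TRIALS     # approaches $17,500
```

Note that no single trial ever pays out $12,500 or $17,500; only the average over many trials lands near the EMV, which is exactly the caution raised above about stand-alone projects.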
One potential alternative to calculating EMV when probabilities are known (or can be estimated) is to use those probabilities to describe the likelihoods of gaining or losing certain amounts of money. This is the idea behind the “value at risk” for an investment project or a portfolio of projects. The value at risk (VaR) describes the amount of money that will be gained or lost with some probability, typically focusing on worst-case situations (like describing the amount of money that would be lost with a 5% probability). From a decision perspective, you might want to avoid investment opportunities with a large VaR (given some probability). Many times, VaR will have a duration associated with it, most often when calculated in reference to portfolios of financial assets (like stocks or bonds).
Wikipedia has a very descriptive entry for VaR [83]. The introductory section of the Damodaran VaR paper [79] is written in clearer English but gets into the details quickly after the first few pages.
To force the example a little bit, here is how VaR might be applied in our simple oil field problem. The field will either yield a producer (probability 35%) or a dry hole (probability 65%). Those are the only two outcomes and probabilities for this problem. If we think about the negative outcome – the dry hole – and ask how much of the value of the project might be at risk if the hole turns out to be dry, then we can calculate VaR just by using the NPVs from Table 10.1. (In this example, there is no time dimension to VaR since the project is a one-shot deal.)
If we choose the option to drill ourselves, the 65% VaR would be -$250,000. If we choose to farm-out the drilling operation, the 65% VaR would be $0. We might choose to farm out simply because the VaR technique tells us that our extreme losses would be lower if we farm-out than if we drill the well ourselves.
In the oil-field example from the previous section, the probabilities of a dry hole and a producer were known with confidence. This isn’t always true in the real world. Moreover, sometimes in real-world situations, there are so many possible outcomes that it is impossible to assign a specific probability to each outcome. Suppose, just hypothetically, that oil prices could vary between $50 and $100 per barrel over the next five years. What is the probability that oil prices will average $50.03 per barrel? $50.04 per barrel? $99.30 per barrel?
In cases where determining probabilities explicitly is not possible or practical, threshold analysis and sensitivity analysis can be useful in understanding how the net present value of different alternatives may vary with some variation in key variables. Both of these techniques are useful in identifying situations under which one alternative is better than another. This can make even complex decision problems much more tractable for the decision-maker, since it reduces the problem from needing to calculate net present values for a large number of alternative outcomes to a judgment of whether one or another set of outcomes is more likely.
As we go through these techniques, we will often refer to something called a “parameter” in the decision problem. In this case, a parameter refers to a variable whose value affects the outcome of one or more alternatives – so a parameter is different from an alternative. In the oil field problem, the major parameter would be whether the field is a dry hole or whether it is a producer (so the parameter itself would be the probability of a producer versus a dry hole). The price of oil or the quantity of oil (if any) might be other important parameters for a problem such as this one.
Sensitivity Analysis
Sensitivity analysis proceeds by selecting one parameter, changing its values, and observing how these new values change the net present value (or EMV) of some alternative. If sensitivity analysis is conducted using a small range of alternative values, or if the alternative values represent different scenarios, then it is sometimes (aptly) called “scenario analysis.” There's more material on scenario analysis further down this page.
We’ll illustrate this using the oil-field problem, performing a sensitivity analysis on the EMV of the drilling option as we vary the dry-hole probability. In this case the outcome is the EMV and the parameter that we are varying is the probability of a dry hole. This analysis is pretty straightforward since we can write an expression for the EMV of drilling. We will need to use the fact that P(producer) = 1 – P(dry hole). (What rules of probability tell us that this is true?)
Writing out the expression, the EMV of drilling is EMV(drill) = P(dry hole) × (−$250,000) + [1 − P(dry hole)] × $500,000. We can rearrange terms in the equation to get EMV(drill) = $500,000 − $750,000 × P(dry hole).
This equation tells us a couple of things. First, the EMV of drilling is going to decline as the probability of a dry hole increases. This makes sense, since we lose money if we drill ourselves and the field is not a producer. Second, the relationship between EMV and the dry hole probability is linear, so the EMV falls at a constant rate as the dry hole probability increases. We can also use the equation to find the dry hole probability where the EMV is equal to zero. We do this by setting EMV(drill) equal to zero and manipulating the equation as follows: 0 = $500,000 − $750,000 × P(dry hole), which gives P(dry hole) = 500,000 / 750,000 ≈ 0.67. In other words, if the dry-hole probability rises above about 67%, drilling yourself has a negative expected value.
Normally, sensitivity analysis is utilized to visualize the change in net present value or EMV with the change in some parameter of interest. For the drilling option, this is shown in Figure 10.2, which plots the EMV versus the dry hole probability. Also shown in Figure 10.2 is the dry-hole probability where the EMV is equal to zero.
As an exercise for yourself, perform the same sensitivity analysis on the option to farm-out. The parameter is still the same (the probability of a dry hole) but the outcome is different (the option to farm-out versus the option to drill). Your graph should look like the one in Figure 10.3.
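Both sensitivity analyses can be sketched in a few lines of code (the function names and the probability grid are my own choices, using the Table 10.1 NPVs):

```python
# EMV of each option as a function of the dry-hole probability p.

def emv_drill(p):
    """EMV of drilling yourself: lose $250k if dry, gain $500k if producer."""
    return p * -250_000 + (1 - p) * 500_000

def emv_farm_out(p):
    """EMV of farming out: $0 if dry, $50k if the field is a producer."""
    return p * 0 + (1 - p) * 50_000

# Sweep the dry-hole probability from 0 to 1, as in Figures 10.2 and 10.3
for p in [i / 10 for i in range(11)]:
    print(f"P(dry)={p:.1f}  EMV(drill)={emv_drill(p):>10,.0f}  "
          f"EMV(farm out)={emv_farm_out(p):>9,.0f}")

# Break-even dry-hole probability for drilling: EMV(drill) = 0
breakeven = 500_000 / 750_000   # = 2/3, where the Figure 10.2 line crosses zero
```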
Threshold Analysis
Threshold analysis (also called break-point analysis) seeks to identify the value of a parameter where the best decision changes. Instead of asking what the probability of a producer versus a dry hole might be (and what the associated EMVs of the option to drill or farm-out are), a threshold analysis would ask how likely the field would need to be a producer for the expected-value decision-maker to choose the option to drill.
Threshold analyses can proceed graphically or algebraically. We will use the oil field example to illustrate both. Remember that the EMV of both the drilling and the farm-out options are functions of the dry-hole probability and of the NPVs for drilling and farming-out. Holding the NPVs constant as in Table 10.1, we can write a mathematical expression for the EMV of each option as a function of the dry-hole probability P(dry hole). We will need to use the fact that P(producer) = 1 – P(dry hole). (What rules of probability tell us that this is true?)
If we graph the EMVs together on the same set of axes (as in Figures 10.2 and 10.3), the point at which the two lines cross would be the threshold. Figure 10.4 illustrates this crossing point. Note that the scale of the axes, especially the horizontal axis, is different than in Figures 10.2 or 10.3.
Looking carefully at Figure 10.4, we can see that at a dry hole probability of around 65% or lower, the EMV of drilling is higher than the EMV of the farm-out option (this is why the drilling curve is above the farm-out curve). If the dry hole probability is above 65%, then the farm-out option has a higher EMV.
Algebraically, we can solve explicitly for the threshold value of the dry hole probability, by setting EMV(drill) equal to EMV(farm-out) and solving for the dry hole probability that makes these EMVs identical. Here we go! Writing p for P(dry hole):
−$250,000 × p + $500,000 × (1 − p) = $0 × p + $50,000 × (1 − p)
$500,000 − $750,000p = $50,000 − $50,000p
$450,000 = $700,000p
p = 450,000 / 700,000 ≈ 0.64
So the threshold dry-hole probability is roughly 64%, consistent with the crossing point of about 65% that we read off Figure 10.4.
Threshold analysis in particular can be a very powerful way of making difficult decisions seem more tractable. Even in the simple oil-field problem, the relevant question for the investor evaluating the oil-field decision is not to determine the exact probability of the field being a producer. The threshold analysis approach asks the potential investor whether they believe that there is more than a 65% chance of the field being dry. If so, then they should farm out drilling or not drill at all. If not, then the investor should choose to drill themselves.
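As a numerical cross-check of the threshold (my own sketch; bisection is overkill for a linear problem like this one, but the same approach works when the EMV curves are not straight lines):

```python
# Find the dry-hole probability where drilling and farming out have
# equal EMV, by bisection on the gap between the two options.

def emv(p, npv_dry, npv_producer):
    """Probability-weighted average NPV for one option."""
    return p * npv_dry + (1 - p) * npv_producer

def gap(p):
    """EMV(drill) minus EMV(farm out); positive means drilling is better."""
    return emv(p, -250_000, 500_000) - emv(p, 0, 50_000)

# gap(0) > 0 and gap(1) < 0, so the crossing lies in [0, 1]
lo, hi = 0.0, 1.0
for _ in range(50):
    mid = (lo + hi) / 2
    if gap(mid) > 0:
        lo = mid
    else:
        hi = mid

threshold = (lo + hi) / 2   # ≈ 0.643, i.e., 450,000 / 700,000
```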
Scenario Analysis
Scenarios about future outcomes or states of the world can be powerful tools for getting decision-makers to think about uncertainty. While scenario analysis is simpler than a sensitivity or a threshold analysis, scenarios can often be made up of specific values of multiple parameters. Scenario analysis is not about predicting the future or making projections. Like sensitivity and threshold analysis, it is a way to get decision-makers to consider different possible future states of the world, to identify the drivers that might lead to those states of the world, and to make plans for those possible states of the world.
In the energy world, one of the pioneers of scenario-based planning has been Shell [84], which started using scenario-based planning in the 1960s, as computer-aided decision-making was emerging among large businesses. Shell adopted scenario planning when it (like other large energy companies) was surprised by the emergence of environmentalism and the OPEC cartel as unforeseen but potentially disruptive forces to its business. The Harvard Business Review [85] has a nice article about Shell's development and use of scenario planning. (The article is also available through Canvas.)
The process of scenario planning has a number of steps:
Scenario planning has its critics - one common observation, for example, is that good scenarios are built around plausibility rather than probability. This can lead to different decision-makers having different views of likely versus unlikely states of the world. Regardless, scenario planning is still a widely-used tool for decision-making.
Uncertainty is a critical problem in the evaluation of energy projects, owing to the capital-intensive nature of most projects and the need for cost recovery over long time frames. Projecting the market and regulatory conditions over such long time frames is often fraught with error and, in hindsight, bad decisions. Two of the major sources of uncertainty facing energy projects are market uncertainty (market prices or price volatility are not known with certainty into the future) and regulatory uncertainty (relevant policy measures cannot be anticipated with perfect foresight). Since low-carbon energy projects in particular need policy support to achieve widespread market deployment, regulatory uncertainty in the form of a lack of climate-change policy or inconsistent application of subsidies and incentives can be especially difficult for investors in low-carbon energy technologies. Even if the future cannot be predicted perfectly, there are ways to incorporate uncertainty into project evaluation. Calculation of expected monetary values (EMV) is possible when all possible outcomes can be identified and probabilities assigned, but since the EMV represents the average payoff over a large number of trials, it may not be appropriate for single energy project evaluation. Sensitivity analysis and threshold analysis are ways of visualizing how the NPV or EMV of a project may change under different values of key parameters.
You have reached the end of Lesson 10! Double check the What is Due for Lesson 10? list on the first page of this lesson to make sure you have completed all of the activities listed there before you begin Lesson 11. Note: The Lesson 11 material will open Monday after we finish Lesson 10.
The links below provide an outline of the material for this lesson. Be sure to carefully read through the entire lesson before returning to Canvas to submit your assignments.
Steve Jobs and Bill Gates started their respective companies (Apple and Microsoft) in their garages. That makes for a nice story, but how exactly did these companies go from garage-band material to global behemoths? They had to raise money or "capital" somehow - there was only so far that Steve Jobs' own bank account (or his parents' bank account) was going to take him. Eventually both Jobs and Gates needed to seek additional capital from various sorts of investors to help their companies grow - there is, after all, an old saying that it "takes money to make money." True enough; in this lesson, we will take a bit of a closer look at the process of raising capital, from venture capital to stocks to bonds. The world of corporate finance can get very murky very fast; and, in some ways, it's more of a legal practice than a business practice. Our focus is going to be on understanding the various mechanisms that are used to finance energy projects and the implications of those funding mechanisms on overall project costs.
Where we will ultimately wind up is back at this mysterious quantity from Lesson 9 called the "discount rate." Where does that come from? When we are looking at social decisions that involve common costs and benefits, the discount rate is usually more of a matter of debate than anything else. But when a business decision is involved (and that business is a for-profit entity), then there is a rhyme and reason behind the determination of the discount rate as the "opportunity cost" of its investors. There are many different types of investors in a typical firm or project, all of whom face different opportunity costs, so we will encapsulate these in a single number called the "weighted average cost of capital" (WACC). The WACC turns out to be the correct discount rate for a company or a project.
Finally, most of the material that we will develop in this lesson is targeted towards for-profit companies making investments that are expected to earn some sort of positive return over a relevant time horizon. We won't talk much about the non-profit sector except at the very end, when we discuss how the deregulation of commodity markets has changed the investment game for many for-profit firms, but not necessarily for publicly-owned or cooperative firms. If you are interested specifically in project finance concepts for non-profit firms, the Non-profit Finance Fund [86] has some good resources available.
By the end of this lesson, you should be able to:
We will draw on sections from several readings. In particular, there are a number of good online tutorials on the weighted average cost of capital. If you want to get deeper into this subject, there is no substitute for a good textbook on corporate finance. The all-time classic is Principles of Corporate Finance by Brealey, Myers, and Allen (672 pp., McGraw Hill). This book has gone through a number of editions, so earlier editions are probably available online for relatively little cost.
Below are some freely-available resources on which we will draw:
This lesson will take us one week to complete. Please refer to the Course Calendar for specific due dates. Specific directions for the assignment below can be found within this lesson.
If you have any questions, please post them to our Questions about EME 801? discussion forum (not email), located in the Start Here! module in Canvas. I will check that discussion forum daily to respond. While you are there, feel free to post your own responses if you, too, are able to help out a classmate.
A lot of what we will be studying in this lesson falls under the umbrella of "corporate finance," even though our focus is actually individual energy projects, not necessarily the companies that undertake those projects. Still, there are a number of parallels and many concepts of how companies should finance their various activities are immediately relevant to the analysis of individual projects. After all, like companies as a whole, individual projects have capital, staffing, and other costs that need to be met somehow. And a company can sometimes be viewed as simply a portfolio of project activities. Similarly, an individual project can be viewed as being equivalent to a company with one single activity. (Following deregulation in the 1990s, a number of major energy projects, such as power plants, were actually set up as individual corporate entities under a larger "holding company.") A lot of the emphasis in the corporate finance field is how companies should finance their various activities. (For example, in the readings and external references you will see a lot of mention of "target" financial structures.) That isn't really our focus - we are more concerned with understanding the various options that might be available to finance project activities. The "right" financing portfolio is ultimately up to the individuals or companies making those project investment decisions.
Project financing options are numerous and sometimes labyrinthine. You may not be surprised to learn that lawyers play an active and necessary role (sometimes the most active role) in structuring financial portfolios for a project or even an entire company. While individual finance instruments span the range of complexities, the basics are not that difficult. For an overview, let's go back to the fundamental accounting identity:
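The identity in question is the standard balance-sheet relationship:

```latex
\text{Assets} = \text{Liabilities} + \text{Owner Equity}
```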
The balance sheet for any company or individual project must obey this simple equation. So, if an individual or company wants to undertake an investment project (i.e., to increase the assets in its portfolio), then it needs some way to pay for these assets. Remembering the fundamental accounting identity, if Assets increase then some combination of Liabilities and Owner Equity must increase by the same dollar amount. Herein lies the fundamental tenet of all corporate and project finance: financing activities that increase the magnitude of Assets must be undertaken through the encumbrance of more debt (which increases total Liabilities) or through the engagement of project partners with an ownership stake (which increases total Owner Equity).
Hence, all projects must be financed through some combination of "debt" (basically long-term loans by parties with no direct stake in the project other than the desire to be paid back) and "equity" (infusions of capital in exchange for an ownership stake or share in the project's revenues).
The following video introduces debt and equity in a little more detail. The article from Business Week [88], while it goes more into the specifics for small businesses than our purposes require, also has a nice overview of debt and equity concepts.
MATT ALANIS: Welcome to Alanis Business Academy. I'm Matt Alanis. And this is an introduction to debt and equity financing.
Finance is the function responsible for identifying the firm's best sources of funding, as well as how to best use those funds. These funds allow firms to meet payroll obligations, repay long-term loans, pay taxes, and purchase equipment, among other things. Although many different methods of financing exist, we classify them under two categories, debt financing and equity financing.
To address why firms have two main sources of funding, we have to take a look at the accounting equation. The accounting equation states that assets equal liabilities plus owner's equity. This equation remains constant, because firms look to debt, also known as liabilities, or investor money, also known as owner's equity, to run operations.
Now let's discuss some of the characteristics of debt financing. Debt financing is long-term borrowing provided by non-owners, meaning individuals or other firms that do not have an ownership stake in the company. Debt financing commonly takes the form of taking out loans and selling corporate bonds. For more information on bonds, select the link above to access the video "How Bonds Work." Using debt financing provides several benefits to firms. First, interest payments are tax deductible. Just like the interest on a mortgage loan is tax deductible for homeowners, firms can reduce their taxable income if they pay interest on loans. Although the deduction doesn't entirely offset the interest payments, it at least lessens the financial impact of raising money through debt financing. Another benefit to debt financing is that firms utilizing this form of financing are not required to publicly disclose their plans as a condition of funding. This allows firms to maintain some degree of secrecy so that competitors are not made aware of their future plans. The last benefit of debt financing that we'll discuss is that it avoids what is referred to as the dilution of ownership. We'll talk more about the dilution of ownership when we discuss equity financing.
Although debt financing certainly has its advantages like all things, there are some negative sides to raising money through debt financing. The first disadvantage is that a firm that uses debt financing is committing to make fixed payments, which include interest. This decreases a firm's cash flow. Firms that rely heavily on debt financing can run into cash flow problems that can jeopardize their financial stability. The next disadvantage to debt financing is that loans may come with certain restrictions. These restrictions can include things like collateral, which require a firm to pledge an asset against the loan. If the firm defaults on payments, then the issuer can seize the asset and sell it to recover their investment. Another restriction is what's known as a covenant. Covenants are stipulations, or terms, placed on the loan that the firm must adhere to as a condition of the loan. Covenants can include restrictions on additional funding, as well as restrictions on paying dividends.
Now that we've reviewed the different characteristics of debt financing, let's discuss equity financing. Equity financing involves acquiring funds from owners, who are also known as shareholders. Equity financing commonly involves the issuance of common stock in public and secondary offerings or the use of retained earnings. For information on common stock, select the link above to access the video "Common and Preferred Stock." A benefit of using equity financing is the flexibility that it provides over debt finance. Equity financing does not come with the same collateral and covenants that can be imposed with debt financing. Another benefit to equity financing is that it does not increase a firm's risk of default like debt financing does. A firm that utilizes equity financing does not pay interest. And although many firms pay dividends to their investors, they are under no obligation to do so.
The downside to equity financing is that it produces no tax benefits and dilutes the ownership of existing shareholders. Dilution of ownership means that existing shareholders' percentage of ownership decreases as the firm decides to issue additional shares. For example, let's say that you own 50 shares of ABC Company. And there are 200 shares outstanding. This means that you hold a 25% stake in ABC Company. With such a large percentage of ownership, you certainly have the power to affect decision making. In order to raise additional funding, ABC Company decides to issue 200 additional shares. You still hold the same 50 shares in the company. But now there are 400 shares outstanding, which means you now hold a 12 and 1/2% stake in the company. Thus your ownership has been diluted due to the issuance of additional shares. A prime example of the dilution of ownership occurred in the mid-2000s when Facebook co-founder Eduardo Saverin had his ownership stake reduced by the issuance of additional shares.
This has been an introduction to debt and equity financing. For access to additional videos on finance, be sure to subscribe to Alanis Business Academy. And also remember to Like and Share this video with your friends. Thanks for watching.
Debt and equity each have costs. The cost of debt is pretty explicit - lenders typically charge interest. The cost of equity is a little more complex since it represents an "opportunity cost." If an equity investor (like a potential holder of stock) buys into Blumsack PowerGen Amalgamated, that investor is foregoing the returns that it could have earned from some other investment vehicle. The attitude of most investors, in the immortal words of Frank Zappa, is "we're only in it for the money." [91] Those foregone returns represent the opportunity cost of investing in Blumsack Amalgamated. If we weight these costs by the proportion of some project that is financed through debt and equity means, we have a number that is known as the "weighted average cost of capital" or WACC. The general equation for WACC is:
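In its standard form, with E and D the dollar amounts of equity and debt financing, V = E + D the total capital raised, and T the marginal tax rate, the WACC equation is:

```latex
\text{WACC} = \frac{E}{V}\,r_E \;+\; \frac{D}{V}\,r_D\,(1 - T)
```

Here, r_E is the cost of equity and r_D is the cost of debt.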
Here, the "costs" are generally in terms of interest rates or rates of return. So, a company facing a 5% annual interest rate would have a "cost of debt" equal to 5% or 0.05. We'll get into these pieces in more depth, and will explain the strange tax term in the WACC equation, after we gain more of an understanding of debt and equity, and how the costs of debt and equity might be determined.
The term "equity" in corporate or project finance jargon indicates some share of ownership in a company or project - i.e., some level of entitlement to some slice of the revenues brought in by the company or project. There are different priority levels of this entitlement - typically, operating costs must be paid (including, in some cases, borrowing costs) before equity investors can get their slice of the net revenues. There are also multiple priority levels of equity investors, which determine who gets paid first if profits are scarce.
The simplest, and one of the most common, forms of equity ownership is through the ownership of company stock. A share of stock is simply an ownership right to a portion of the company's profits. When public stock is initially issued by a company (called an "initial public offering"), the price paid for that stock is effectively a capital infusion for the company.
When you think about shares of stock, you may have in your mind things that are traded on the New York Stock Exchange. Not all stock is traded or issued this way, at least not initially. Often, company founders or owners will decide to sell limited amounts of stock in a company without that stock being available for the general public to purchase or being traded on an exchange like the New York Stock Exchange. A company that issues stock in this way is often referred to as a "private" company, which means that its stock is held and traded (if it's ever traded) in the hands of individuals or institutions selected by the company. Oftentimes, stock in a private company comes with some sort of voting right or other representation in how the company runs its operations.
A company that issues shares of stock to the general public is called a "publicly-traded" or "public" (for short) company. The term "public" in this case should not be confused with ownership by any government or the mission of the company - the term simply refers to the availability of the company's stock. The decision to "go public" is complicated and has costs as well as benefits. The obvious benefit is that issuing public stock is a relatively straightforward way to raise large amounts of capital. Owners of private stock who allow their stock to be sold in the public offering can also make substantial amounts of money if the demand for the stock among the public is high. There are, however, a couple of big downsides. First, issuing more shares of stock effectively dilutes the value of existing shares. If a company has $1 million in profits and increases the number of shares of stock from 1 million to 2 million, then the profit attributable to each share drops from $1 to $0.50. People are sometimes willing to pay large sums for company stock if they believe that profits will increase in the future. Second, the more equity investors there are (public or private), the larger the loss of control by the company's original owners.
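The dilution arithmetic from the paragraph above can be sketched in a couple of lines of Python (the figures are the hypothetical ones from the example):

```python
# Dilution of per-share profits when a company issues new shares
# (hypothetical numbers from the example above).
profits = 1_000_000        # annual profits, dollars
shares_before = 1_000_000  # shares outstanding before the new issue
shares_after = 2_000_000   # shares outstanding after the new issue

per_share_before = profits / shares_before  # $1.00 per share
per_share_after = profits / shares_after    # $0.50 per share

print(per_share_before, per_share_after)
```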
Raising equity capital can happen through a number of different channels. Brief descriptions of a few of the major channels follow:
The "cost of equity" for a project or company represents the return that an equity investor would need in order to judge that project or company a worthwhile investment. Remember that the cost of equity is really an opportunity cost. Individual investors may have their own criteria for judging opportunity cost, and we can't get into their heads all of the time. So, how do we estimate opportunity cost for a particular project or company? The most common approach is to use a framework called the "Capital Asset Pricing Model" (CAPM). Investopedia has a nice introduction to this framework [93] that includes both the intuition and the equations. Here, we will stick mostly to the intuition.
Using the CAPM to determine the cost of equity, the equation is:
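In its standard form, with r_f the return on a "safe" asset (such as a long-dated Treasury bond), (r_m - r_f) the market risk premium, and beta a measure of how risky the company or industry is relative to the market as a whole, the CAPM equation is:

```latex
r_E = r_f + \beta\,(r_m - r_f)
```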
For project evaluation, it is common to use the beta and risk premium relevant to the industry in which the project is going to operate (e.g., utilities for a power plant or gasoline for a refinery). This web page has a nice table estimating the cost of equity for different industries [94].
Debt financing refers to capital infusions by entities that do not take any ownership or equity stake in the company or project. Debt financing is like a loan - in fact, bank loans are among the most common forms of debt financing for projects and companies. Most debt is "private," in that it is held in the hands of a single entity (like a bank) or group of entities, and transferring that debt to another party is time-consuming. Just as with stocks, there is "public" debt that is traded openly. Many corporations issue various types of bonds that can be traded no differently than stocks.
The big difference between debt and equity financing has to do with repayment. Equity financing is essentially a loan that is "repaid" through entitlements to a stream of future company or project profits. Debt financing involves various terms of repayment. In many cases, holders of debt have priority on repayment before holders of equity interests in a company.
The cost of debt is determined primarily by how likely or unlikely the lender is to be paid back. If a project goes into bankruptcy, for example, holders of debt may not earn back their entire investments. (One of the advantages of being a lender is that in the case of bankruptcy, lenders often have a higher priority for repayment than equity shareholders.) Rating agencies such as Moody's, S&P, and Fitch use ratings as a general indication of the riskiness of debt.
Wikipedia has nice overview tables and charts [95] of what the various ratings mean. For our purposes, the long-term column is more important than the short-term column. The Federal Reserve Bank of St. Louis [96] is a nice resource for finding returns or "yields" on corporate bonds of various grades. Oftentimes, for each step away from the top rating (AAA, for example), investors demand a roughly 0.25% to 0.5% increase in yield in exchange for that increased risk of default, but this is not always the case. (You may see examples when you click on different links for corporate bond return data.) The list of available corporate bond rates from the St. Louis Fed can be dizzying. If you search on the website for the bond grade that you're looking for (like "AAA" or "BB"), then you can find information more easily. You can also try using a web search engine by typing in "FRED AAA Corporate bond yield" if you are looking for the AAA bond. A couple of specific examples that you can look at to compare yields are:
Now that we've covered the basics of equity and debt financing, we can return to the Weighted Average Cost of Capital (WACC). Recall the WACC equation from the beginning of the lesson:
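Written out, with E and D the dollar amounts of equity and debt financing, V = E + D, and T the marginal tax rate:

```latex
\text{WACC} = \frac{E}{V}\,r_E \;+\; \frac{D}{V}\,r_D\,(1 - T)
```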
Evaluating the WACC for a company is different than evaluating the WACC for an individual energy project. When the WACC for a company is evaluated, we are often trying to determine (under imperfect information) what a company's costs of capital are. In this case, we would utilize as much financial data as possible in order to estimate the various terms in the WACC equation.
For an individual energy project, the various terms in the WACC equation are determined in large part by the type of investment being made, the type of market (regulated versus deregulated) in which the investment is occurring and the individual company or group of companies making the investments.
The tax rate term in the WACC equation may seem odd. Why discount the tax rate from the cost of debt financing? The reason is that from the perspective of a company, the interest on debt (i.e., the cost of debt) is tax deductible, so interest payments are offset by tax savings.
Let's go through a hypothetical example to see how this works. Suppose that Blumsack PetroServices Amalgamated wanted to invest in a new oil refinery. What is the discount rate that Blumsack PetroServices should use in evaluating the NPV of the refinery project? The answer is that the discount rate is equal to the firm's Weighted Average Cost of Capital! So we need to calculate the WACC to determine the discount rate that Blumsack PetroServices should use.
Let's assume that 15% of Blumsack PetroServices' refinery activity would be financed by debt (just to use a single number). Oil company bonds have historically had very high ratings, so we'll assume that Blumsack PetroServices has a long-term bond rating of AAA. Looking online at corporate bond yields [100], we see that a 20-year AAA corporate bond would have a yield of 2.5% (as of the time of this writing - keep in mind that these rates can and do change frequently).
If Blumsack PetroServices faced a 35% marginal tax rate, then its cost of debt financing would be 0.025 × (1 - 0.35) ≈ 0.02, or 2% (I'm rounding up here - the answer to more significant digits is 1.6%).
Turning now to the cost of equity financing, we need the return on the safe asset, the market risk premium, and the beta for the petroleum industry. The yield on the 30-year treasury bond [101] was 2.34% at the time of this writing. We will assume a risk premium of 5%, and from the "Cost of Capital by Sector [94]" web page we see that the beta for the petroleum industry is between 1.30 and 1.45 (the beta is in the second column of the table; we'll use 1.45 for this example). Thus, the cost of equity for Blumsack PetroServices would be 0.0234 + (1.45 × 0.05) ≈ 0.1, or 10% (again, I'm rounding here - the answer to more significant digits is 9.59%).
Assuming that 15% of the refinery was financed through debt and 85% through equity, the WACC for the Blumsack PetroServices refinery project would be:
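A short script can assemble the pieces. (I'm using the unrounded costs of debt and equity here; with the rounded 2% and 10% figures from above, the answer would come out closer to 8.8%.)

```python
# WACC for the hypothetical Blumsack PetroServices refinery project,
# using the example figures from the text.
debt_share = 0.15      # fraction of the project financed by debt
equity_share = 0.85    # fraction financed by equity
bond_yield = 0.025     # 20-year AAA corporate bond yield (example figure)
tax_rate = 0.35        # assumed marginal tax rate
risk_free = 0.0234     # 30-year Treasury yield (example figure)
risk_premium = 0.05    # assumed market risk premium
beta = 1.45            # petroleum-industry beta (example figure)

cost_of_debt = bond_yield * (1 - tax_rate)        # ~1.6% after the tax shield
cost_of_equity = risk_free + beta * risk_premium  # ~9.6% via CAPM

wacc = debt_share * cost_of_debt + equity_share * cost_of_equity
print(f"WACC = {wacc:.2%}")  # roughly 8.4%
```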
The WACC represents the discount rate that a company should use in conducting a discounted cash flow analysis of a given energy project. The reason is that the discount rate represents the opportunity cost of getting something in the future relative to getting something today. Since the WACC represents the average return for an energy project (remembering that that average is weighted across both debt and equity investors), it represents a kind of average opportunity cost for investment in a project.
Please read Section II (Problems in Managing the Restructured Industry) from Morgan, et al. [82] and have a look at the presentation from Gary Krellenstein [89]. Krellenstein's presentation in particular, while focused mostly on transmission investment, raises a number of really important issues regarding how changes in market institutions - such as decontrols on wellhead prices for natural gas, or electricity deregulation - have changed the investment environment for large energy projects. These impacts have perhaps been most evident in electricity, but are not confined to just the electricity sector.
The main idea from both readings is that deregulation has done two things simultaneously. First, it has removed the guarantee of cost recovery from major energy projects such as power plants. (Krellenstein's presentation is focused on transmission because, at the time that it was written, the U.S. federal government was considering proposals to deregulate transmission the same way that it had deregulated generation. In the end, it did not, and most transmission lines continue to enjoy cost recovery ensured by the state or federal government.) This means that energy projects need to earn sufficient revenues to cover costs, plus meet the rate of return demanded by investors. Second, commodity prices in deregulated energy markets are volatile - we have seen this in previous lessons with oil, petroleum products, natural gas, and electricity. Since the revenue stream for energy projects became more volatile, the projects themselves are viewed as being increasingly risky.
The point that Krellenstein makes on slide 13 of his presentation about "system" versus "project" financing sums it up nicely. From an investor's point of view, regulated energy markets are safe because the risks of cost recovery are effectively shifted from the project's owners to the people who pay energy bills. This may not be economically efficient, but from capital's point of view it is safe. Deregulation has shifted some of this risk back to the project's owners (i.e., the company building/operating the plant and its equity shareholders). This is why Krellenstein points out that the majority of "project financed" energy investments are heavily weighted towards debt financing. Since this debt financing is not highly rated (see Figure 11.1, from Krellenstein's presentation), it carries higher yields. The equity component of that financing also becomes more expensive since investors demand returns over a shorter time horizon, as shown in Figure 11.2 (also from Krellenstein).
You have reached the end of Lesson 11! Double check the What is Due for Lesson 11? list on the first page of this lesson to make sure you have completed all of the activities listed there before you begin Lesson 12. Note: The Lesson 12 material will open Monday after we finish Lesson 11.
The links below provide an outline of the material for this lesson. Be sure to carefully read through the entire lesson before returning to Canvas to submit your assignments.
For all of their positive aspects, one of the truths about most low-carbon energy is that in many regions it isn't cheap compared to fossil-fuel competitors. For the most part, renewables have a hard time competing directly with fuels or electricity generated from fossil resources. There are a couple of reasons for this, of course - many renewable technologies are not as mature, so costs have not yet stabilized (although the costs of wind and solar photovoltaics have declined rapidly in recent years, so the date of 'maturity' may well be nigh). The other reason is that the social costs of some harmful pollutants emitted when fossil fuels are burned for useful energy are not reflected in the price of the fuel itself or the price of the final energy output.
One solution to this problem is to place a price on pollution, which would make non-polluting alternatives look more economically attractive. The United States and many other countries actually do this with some types of pollutants - sulfur dioxide and oxides of nitrogen, for example. The European Union (and some areas of the U.S.) impose a price on carbon dioxide emissions through a system of tradeable emissions permits. Whether those prices are sufficiently high as to reflect the true social cost of pollution is subject to a lot of debate, some of it quite heated. But in concept, it is possible to level the playing field by taxing or pricing pollution.
We'll talk a bit about taxes in this lesson but will spend more time on the other alternative, which is to subsidize or incentivize energy sources that don't pollute. Many jurisdictions actually do a bit of both. The motivation for these subsidies can be as much about politics as environmental quality - in many cases renewable energy subsidies are a form of industrial policy, meant to promote growth in a specific sector (like wind or solar), rather than about reducing pollution per se. These subsidies and incentives can take several different forms, from providing payments for each unit of energy generated to lowering capital or financing costs for new investments.
Our focus here is really on how these subsidies and incentives affect the financial analysis of power plants. We won't go too deeply into the mechanisms behind each specific type of incentive program, nor will we talk too much about how effective the incentive programs might be. (Another course offered through the RESS program, EME 803, touches on these issues.) We'll also limit ourselves to those types of incentives that directly affect project financing or financial analysis. There are a very wide variety of policy options, apart from direct taxes, subsidies or incentives, that can have the effect of shifting investment decisions towards renewable energy.
By the end of this lesson, you should be able to:
The best external resource for this lesson is the Database of State Incentives for Renewable Energy (DSIRE) [102], which you have seen earlier in the class. This is a fairly comprehensive online resource for information on renewable energy incentives.
One focus of this lesson is on the markets for Renewable Energy Credits, which are tradeable certificates generated by renewable and alternative energy projects constructed under state renewable portfolio standards, which set quotas for electricity generation from alternative energy sources. The National Renewable Energy Laboratory publishes an annual summary of the REC market, which is very well done.
NREL: Status and Trends in the US Voluntary Green Power Market [103].
Section III of the following reading (available for download in the Lesson 12 module) contains a reasonably good description of how renewable portfolio standards work in a number of different states:
This lesson will take us one week to complete. Please refer to the Course Calendar for specific due dates. See specific directions for the assignment below.
If you have any questions, please post them to our Questions about EME 801? discussion forum (not email), located in the Start Here! module in Canvas. I will check that discussion forum daily to respond. While you are there, feel free to post your own responses if you, too, are able to help out a classmate.
Before you get too deeply into this lesson, have a look at the Glossary page from the DSIRE website [104]. Focus on the first section of the page, which defines a number of types of financial incentives. The definitions here aren't too in-depth, but reviewing them will get you familiar with the language of different types of incentives. If you are interested, the section of the page on "Rules, Regulations and Policies" is also worth a look to give you some appreciation for the truly dizzying array of ways that governments at various levels (state, federal, and local) are trying to encourage the use of renewable energy resources, particularly for the production of electricity.
In general, DSIRE is a very good resource for learning about renewable energy and energy-efficiency incentives in the United States. The following video provides a quick tour of some of the most useful information on the DSIRE website. (Note: as of this writing the DSIRE website was undergoing some major revisions, but the structure of the web site itself does not appear to have changed that much. It is worth some time wandering around that website to see what information is available.)
PRESENTER: The Database of State Incentives for Renewables & Efficiency is one of the most useful online resources for information about incentives and subsidies available to renewable energy and energy efficiency in the United States. And it is limited to the US. But it's probably one of the best centralized locations for this information on the web. And so, the website is www.dsireusa.org [105]. Or if you just go to Google and type in DSIRE, you'll probably find it. So, when you go to the Home page, the first thing that you will see is this map of the United States. And each of the states and territories within the US is clickable to get more information about incentives available in that state. So, for example, I'm going to click on my home state of Pennsylvania. And it brings up a page with a very long list of different incentives available for renewable energy projects and energy efficiency projects in the state of Pennsylvania. All of these different incentive programs are run by either the state or some area within the state, like a county or an electric utility or a town, or something like that. And so, at the very top of the page, you can find information about all sorts of financial incentives. And then, if you scroll down to the bottom of the page, you can find information on state policies that are relevant to renewable energy or energy-efficient development. And so policies don't include things that would directly affect the financials of a project, like a rebate or a tax credit, or something like that. And so, you can find this stuff for just about any state in the US. DSIRE also has a nice list of federal incentives. And there's a special page for federal incentives. And if you look for the little American flag icon, just about anywhere on the DSIRE's website, it'll bring you to the page for federal incentives. And again, the page is divided into a section for financial incentives. So, these would be things like tax credits and loan guarantees.
And then, relevant to federal policies for the renewable energy and energy efficiency sectors. The last thing that I wanted to point out, which is really useful on the DSIRE website is this page of maps. So, if on the left hand side on the menu bar, you'll see a page that says summary maps. And they have a whole bunch of different maps available that give you information on different things. So, one of the most commonly used is the map showing which states in the US have renewable portfolio standards. And so, they have these in PowerPoint and PDF formats. And I'll click on the PDF one-- show you what it looks like. And here it comes up. And it highlights all of the states that have enacted some sort of renewable portfolio standard. And it gives you some information about whether certain technologies are eligible under the standard. So, if you see a little sun icon, for example, that means that there's a special solar power quota within that state's RPS. And so what you're looking at here is the map that is basically Figure 1 from the lesson.
We'll categorize subsidies and incentives into a few broad categories:
The Renewable Portfolio Standard (RPS, sometimes called Clean Energy Standards) also deserves some mention, since it has been a popular mechanism to support renewable energy development in the United States in particular. Please have a look at Section III of "The Cost of the Alternative Energy Portfolio Standard in Pennsylvania" (reading on Canvas), which describes the RPS in one particular U.S. state, Pennsylvania. Pennsylvania's RPS is typical of how many such portfolio standards work. The reading also provides an overview of RPS policies in some other states. (You might notice that Pennsylvania qualifies some fossil fuels as "alternative energy sources" under its RPS, which demonstrates that RPS policies are not always equivalent to low-carbon policies.)
The RPS is basically a quota system for renewable energy and has been applied mostly to electricity. More than half of U.S. states have adopted some form of RPS, as shown in Figure 12.1, taken from the DSIRE website (note: look on the DSIRE website for the most up-to-date version of this figure, as state programs change structure rapidly). Some countries have adopted policies for blending petroleum-based transportation fuels with biomass-based fuels (such as the Renewable Fuels Standard in the U.S., which you can read about through the U.S. Environmental Protection Agency [106]), but our discussion here will focus on portfolio standards for alternative electricity generation technologies. Rather than subsidizing renewables through financial mechanisms as we have already discussed, an RPS sets quantity targets for market penetration by designated alternative electricity generation resources within some geographic territory, like a state or sometimes an entire country. Typical technologies targeted under RPS include wind, solar, biomass, and so forth. The way that RPS systems typically work in practice is that electric utilities can either build the required amount of alternative power generation technologies themselves, or they can contract with a separate company to make those investments. Companies that build renewable energy projects can register those projects in a jurisdiction with the RPS to generate tradeable renewable energy credits (RECs). The RECs can then be sold to electric utilities, who can use those credits to meet their renewable energy targets.
You can find more information on REC prices [107] through the NREL REC report assigned as part of this week's reading [103]. The data in the report make a distinction between "voluntary" and "compliance" REC prices: some states or areas have RPS targets that are not mandated by law, while in other states utilities face penalties if they fail to meet their RPS targets.
Incentives and subsidies, if structured correctly, should make the technologies that qualify for them more competitive compared to technologies that do not qualify. One way to gauge whether a subsidy or incentive is likely to be effective is to look at how it affects the levelized cost of energy (LCOE) for different projects.
Incentives and subsidies can affect the LCOE in one of two basic ways: they can reduce the LCOE directly, through tax credits or feed-in tariff type structures, or they can reduce the WACC faced by the project developer (e.g., through loan guarantees or low/zero-interest loans).
The production tax credit (PTC) is one of the best-known incentive programs for renewable energy in the United States. The structure of the PTC has changed over time, but it currently provides a tax credit of $22 for each MWh of electricity generated by a qualifying wind facility. The PTC does not actually extend over the entire life of the wind facility, but for the purposes of this example, we'll ignore that detail.
Suppose that we had a single 1 MW wind turbine that cost $1,200 per kW (or $1.2 million total) to construct. The wind turbine produces 3,000 MWh of electricity each year and the developer faces a WACC of 15% per year. The operations cost for the wind turbine is $5 per MWh. The lifetime of the project is assumed to be ten years. Remember our formula for the LCOE:

LCOE = LFC + LVC, where LFC = (Capital Cost × CRF) / Annual Output and the capital recovery factor is CRF = r(1+r)^N / [(1+r)^N − 1].
Here the LVC is $5/MWh. Looking back to the formula from Lesson 9, verify for yourself that the LFC for our hypothetical wind turbine is $79.70 per MWh, so the LCOE is $79.70 + $5 = $84.70 per MWh.
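If you'd like to check the arithmetic, the calculation above can be sketched in a few lines of Python (all numbers come from the example; the capital recovery factor is the standard annuity formula):

```python
# Levelized cost check for the hypothetical 1 MW wind turbine.
capital_cost = 1_200_000   # $ total construction cost ($1,200/kW x 1,000 kW)
annual_output = 3_000      # MWh per year
wacc = 0.15                # weighted average cost of capital (annual)
years = 10                 # project lifetime
lvc = 5.0                  # levelized variable cost, $/MWh

# Capital recovery factor: annualizes the up-front capital cost.
crf = wacc * (1 + wacc) ** years / ((1 + wacc) ** years - 1)

lfc = crf * capital_cost / annual_output   # levelized fixed cost, $/MWh
lcoe = lfc + lvc

print(round(lfc, 2))   # ~ 79.70
print(round(lcoe, 2))  # ~ 84.70
```

The levelized fixed cost comes out to $79.70 per MWh, matching the figure quoted above.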
Recall that the way the production tax credit works is that it acts like a rebate to the owner of the project receiving the credit, for each MWh of electricity generated (note that not all tax credits work like this - some tax credits are based on capacity or on the amount invested). That rebate is functionally like a discount on the LCOE. So, to incorporate the impact of the PTC, we just subtract it from the LCOE. Thus, the impact of the PTC on our hypothetical wind turbine is: LCOE with the PTC = $84.70 − $22 = $62.70 per MWh.
Recall from the introduction to the lesson that there is some equivalence, in terms of encouraging low-carbon energy resources, between providing subsidies or incentives for those resources and imposing a tax or a price on emissions from polluting resources. You can see how this might work using the LCOE equation. If we were to take a hypothetical 1 MW coal plant with a capital cost of $2,000 per kW ($2 million total) and a marginal operations cost of $20 per MWh, assuming that the coal plant produced 7,000 MWh each year under the same financial terms (WACC and time horizon) as the wind plant, you would get the LCOE for that power plant to be $76.93 per MWh (try it yourself). Without any taxes, subsidies, or incentives, the coal plant would look cheaper than the wind plant. But with the PTC, the LCOE for the wind plant would fall below that of the coal plant.
Now, let's see what happens if we were to impose a carbon tax of $10 per MWh on the coal plant (one MWh of coal-fired electricity has about one tonne of embedded CO2, so a tax of $10 per MWh is roughly equivalent to a carbon tax of $10 per tonne of CO2). The carbon tax is just a variable cost of operation, so it increases the LVC for the coal plant from $20 per MWh to $30 per MWh. The LCOE rises by $10 per MWh to $86.93 per MWh. With the carbon tax, the coal plant now looks more expensive than the wind plant even without the PTC. (The wind plant looks a lot better if it happened to get the PTC at the same time that the coal plant was being taxed for its carbon emissions.)
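You can verify the coal-plant numbers the same way; a carbon tax simply enters as an addition to the levelized variable cost. A sketch, using the figures from the example above:

```python
# LCOE for the hypothetical 1 MW coal plant, with and without a $10/MWh carbon tax.
def lcoe(capital_cost, annual_output, lvc, wacc, years):
    """Levelized cost of energy: annualized capital cost plus variable cost, $/MWh."""
    crf = wacc * (1 + wacc) ** years / ((1 + wacc) ** years - 1)
    return crf * capital_cost / annual_output + lvc

coal = lcoe(2_000_000, 7_000, 20.0, 0.15, 10)
coal_taxed = lcoe(2_000_000, 7_000, 20.0 + 10.0, 0.15, 10)  # carbon tax raises LVC

print(round(coal, 2))        # ~ 76.93
print(round(coal_taxed, 2))  # ~ 86.93
```

Note that the tax shifts the coal LCOE by exactly the tax amount, since it enters the formula as a pure variable cost.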
Next, we'll take a look at the impact that a loan guarantee or low-interest loan might have on the LCOE for our hypothetical wind project. Suppose that we got to our 15% WACC for the wind plant through the following calculation: the wind plant is financed 50% by debt and 50% by equity, with costs of 15% and 20%, respectively. If the tax rate is 35%, then we would get a WACC of approximately 15% (try it yourself).
Now, suppose that the wind project is able to obtain a loan guarantee that lowers its cost of debt to 3% (assume for the purposes of this example that the share of debt and equity financing stays constant). This would lower the WACC to 11.2% and would lower the LCOE to $73.49 per MWh (again, you should try these calculations yourself). In this case, comparing the impact of the loan guarantee to the impact of the PTC, we can see that the PTC is more advantageous (i.e., it yields a greater reduction in LCOE).
In this section, we'll take our hypothetical wind project and look more closely at how subsidies and incentives can be incorporated into the pro forma financial statements. We'll focus on the PTC for this example. The project parameters for the wind turbine will remain the same, except that we'll extend the operational time horizon from 10 years to 15 years and assume fixed operations and maintenance costs of $10,000 per year. We'll also assume a sales price of $60 per MWh over the entire life of the wind project. The project parameters are shown in Table 12.1.
Parameter | Value | Units
---|---|---
Capital Cost | $1,200,000 |
Annual discount rate | 15 | percent
Decision Horizon (N) | 15 | years
Annual Output | 3,000 | MWh
Marginal Cost | $0 | per MWh
Variable O&M | $5 | per MWh
Fixed O&M | $10,000 | per year
Tax Rate | 35 | percent
Sales Price | $60 | per MWh
The analysis that we are describing here is included in an Excel file posted in the Lesson 12 module in Canvas, called "Wind Pro Forma.xlsx (no PTC)." The tables below will show some excerpts from the full P&L and Cash Flow tables.
First, we'll take a look at the pro forma without the production tax credit. Tables 12.2 and 12.3 show excerpts from the P&L and Cash Flow statements (Years 0 through 3 for each statement). You could also try to reproduce these yourself as an exercise - you can check your work using the Excel file that is in Canvas.
 | Year 0 | Year 1 | Year 2 | Year 3
---|---|---|---|---
Construction Cost | $1,200,000 | $0 | $0 | $0
Annual Operating Revenue | $0 | $180,000 | $180,000 | $180,000
Annual Variable Operating Cost | $0 | $15,000 | $15,000 | $15,000
Annual Fixed Operating Cost | | $10,000 | $10,000 | $10,000
Annual Net Operating Revenue | | $155,000 | $155,000 | $155,000
Depreciation Expense | | $120,000 | $216,000 | $172,800
Taxable Net Income | | $35,000 | $(61,000) | $(17,800)
Taxes | | $12,250 | $ - | $ -
Income Net of Taxes | | $22,750 | $(61,000) | $(17,800)
 | | Year 0 | Year 1 | Year 2 | Year 3
---|---|---|---|---|---
(1) | Investment Activities | $(1,200,000) | | |
(2) | Net Income from Operating | | $22,750 | $(61,000) | $(17,800)
(3) | Depreciation Expenses | | $120,000 | $216,000 | $172,800
(4) | Net increase or decrease in cash | $(1,200,000) | $142,750 | $155,000 | $155,000
Based on the cash flow statement, we can calculate the net present value and IRR for this project. If you make the calculation yourself or look at the Excel sheet, you will find that the project has a 15-year NPV that is negative - over that time period the project loses $400,440 in present discounted value terms. The IRR is calculated to be 7%.
Here the IRR is itself useful information - it tells you how low the WACC would need to be in order for the project to break even. You can thus use the IRR in evaluating the usefulness of loan guarantees or low-interest loans in making renewable energy projects profitable. In this case, if the wind project has a 50% debt and 50% equity financing structure, then even if the cost of debt were driven down to 0%, the project would still face a WACC of around 10%. Since this is higher than the IRR, we can conclude that unless the mix of debt and equity financing can be adjusted (more debt, less equity) then a zero-interest loan will not be sufficient to make our wind project profitable.
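The NPV and IRR above can be reproduced with a short script. The first three depreciation values in Table 12.2 match the 10-year MACRS schedule, so the sketch below assumes that schedule for the full project; it also assumes taxes are simply zero in loss years (no loss carry-forward), as Table 12.2 suggests. With those assumptions, the quoted figures come out almost exactly:

```python
# Pro forma cash flows for the wind project without the PTC.
# Assumed: 10-year MACRS depreciation (consistent with the first three years
# shown in Table 12.2) and zero taxes in loss years (no loss carry-forward).
macrs_10yr = [0.10, 0.18, 0.144, 0.1152, 0.0922, 0.0737,
              0.0655, 0.0655, 0.0656, 0.0655, 0.0328]

capital = 1_200_000
net_op_revenue = 180_000 - 15_000 - 10_000   # revenue - variable O&M - fixed O&M
tax_rate = 0.35
years = 15

cash_flows = [-capital]
for t in range(1, years + 1):
    dep = capital * macrs_10yr[t - 1] if t <= len(macrs_10yr) else 0.0
    taxes = max(0.0, tax_rate * (net_op_revenue - dep))
    cash_flows.append(net_op_revenue - taxes)   # net income + depreciation add-back

def npv(rate, flows):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(flows))

def irr(flows, lo=0.0, hi=1.0):
    # Bisection: NPV is decreasing in the discount rate for this cash-flow shape.
    for _ in range(100):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if npv(mid, flows) > 0 else (lo, mid)
    return (lo + hi) / 2

print(round(npv(0.15, cash_flows)))  # ~ -400,440
print(round(irr(cash_flows), 2))     # ~ 0.07
```

The NPV at the 15% WACC comes out to roughly -$400,440 and the IRR to about 7%, matching the Excel model.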
Now, we'll incorporate the production tax credit for wind into our analysis. This section of the discussion will reference a different Excel file, "Wind Pro Forma.xlsx (with PTC)," which is also posted in the Lesson 12 module in Canvas. Here we'll simply assume that our project is eligible for the PTC, and that the PTC is set at $23 per MWh.
To incorporate the PTC into the pro forma, we need to add an extra line to the P&L statement, which calculates the amount of the tax credit. Since the tax credit is calculated as a direct deduction from taxes paid, it is possible (and you'll see it happen in this example) that the Taxes line item in the P&L actually turns out to be negative. This indicates that the wind project receives a check from the IRS each year, because of the production tax credit. (The depreciation allowances also contribute here.) Table 12.4 shows Years 0 through 3 of the P&L statement with the PTC included. Please have a look at the Excel file in Canvas as well - you will notice that there is a line item for the tax credit only for years 1 through 10 (after which the project is no longer eligible for the PTC).
 | Year 0 | Year 1 | Year 2 | Year 3
---|---|---|---|---
Construction Cost | $1,200,000 | $0 | $0 | $0
Annual Operating Revenue | | $180,000 | $180,000 | $180,000
Annual Variable Operating Cost | | $15,000 | $15,000 | $15,000
Annual Fixed Operating Cost | | $10,000 | $10,000 | $10,000
Annual Net Operating Revenue | | $155,000 | $155,000 | $155,000
Depreciation Expense | | $120,000 | $216,000 | $172,800
Taxable Net Income | | $35,000 | $(61,000) | $(17,800)
Tax Credit | | $69,000 | $69,000 | $69,000
Taxes | | $(56,750) | $(69,000) | $(69,000)
Income Net of Taxes | | $91,750 | $8,000 | $51,200
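The tax-credit mechanics in Table 12.4 can be expressed in one line: the credit comes straight off the tax bill, so the Taxes entry can go negative. A sketch, using the Year 1 and Year 2 values from the table (and again assuming zero taxes before the credit in loss years):

```python
# Taxes with the production tax credit: the credit is deducted directly from taxes paid.
def taxes_with_ptc(taxable_income, output_mwh, tax_rate=0.35, ptc=23.0):
    credit = ptc * output_mwh                    # $23/MWh * 3,000 MWh = $69,000/year
    taxes_before_credit = max(0.0, tax_rate * taxable_income)
    return taxes_before_credit - credit          # negative = net payment to the project

print(taxes_with_ptc(35_000, 3_000))    # Year 1: ~ -56,750
print(taxes_with_ptc(-61_000, 3_000))   # Year 2: ~ -69,000
```

Income net of taxes is then just taxable income minus this (negative) tax figure, which is why the Year 1 entry jumps to $91,750.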
Table 12.5 shows the cash flow statement for Years 0 through 3. There is no real adjustment needed for the cash flow statement, since the cash flow statement starts with the "Income Net of Taxes" line item from the P&L.
 | | Year 0 | Year 1 | Year 2 | Year 3
---|---|---|---|---|---
(1) | Investment Activities | $(1,200,000) | | |
(2) | Net Income from Operating | | $91,750 | $8,000 | $51,200
(3) | Depreciation Expenses | | $120,000 | $216,000 | $172,800
(4) | Net increase or decrease in cash | $(1,200,000) | $211,750 | $224,000 | $224,000
We can again calculate the net present value and IRR of the plant. The PTC in this case does not make the NPV positive - the project loses $54,145 in present discounted value terms - but the PTC does increase the NPV by close to $350,000 over the life of the plant. It also raises the IRR to 14%. In this case, if the PTC could be coupled with RECs for the project (even if those RECs were not worth very much individually), then the NPV would likely move into positive territory.
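These figures can be checked by adding the $69,000 annual credit (years 1 through 10) to the no-PTC cash flows; the same 10-year MACRS depreciation assumption is used as before:

```python
# Cash flows with the $23/MWh PTC received during the project's first 10 years.
macrs_10yr = [0.10, 0.18, 0.144, 0.1152, 0.0922, 0.0737,
              0.0655, 0.0655, 0.0656, 0.0655, 0.0328]

capital = 1_200_000
net_op_revenue = 155_000                    # $180,000 revenue less $25,000 costs
tax_rate, ptc_credit = 0.35, 23.0 * 3_000   # credit = $69,000/year, years 1-10

cash_flows = [-capital]
for t in range(1, 16):
    dep = capital * macrs_10yr[t - 1] if t <= len(macrs_10yr) else 0.0
    taxes = max(0.0, tax_rate * (net_op_revenue - dep))
    if t <= 10:
        taxes -= ptc_credit                 # credit comes straight off the tax bill
    cash_flows.append(net_op_revenue - taxes)

def npv(rate, flows):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(flows))

def irr(flows, lo=0.0, hi=1.0):
    for _ in range(100):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if npv(mid, flows) > 0 else (lo, mid)
    return (lo + hi) / 2

print(round(npv(0.15, cash_flows)))  # ~ -54,145
print(round(irr(cash_flows), 2))     # ~ 0.14
```

Comparing this against the no-PTC run shows the NPV improving by roughly the present value of a 10-year, $69,000-per-year annuity at 15%, which is where the "close to $350,000" figure comes from.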
Most renewable energy projects are currently at a cost disadvantage to conventional energy projects for two reasons. First, many renewable energy technologies are not as mature as conventional technologies. Second, the social costs of pollution associated with fossil-fuel usage are not always fully incorporated into the prices charged in the marketplace. If the avoidance of those social costs could be monetized, then perhaps renewables would be more competitive with conventional energy resources. There are two basic ways to incorporate those social costs. First, polluting energy resources could be taxed (or a price put on their polluting emissions). Second, non-polluting energy resources could be given incentives or subsidies in some way, which lowers their costs compared to conventional energy resources. In this lesson, we discussed four basic mechanisms for providing subsidies and incentives. Tax credits and feed-in tariffs act as direct subsidies for each unit of energy produced. Rebates and grants can offset capital costs of new investments. Loan guarantees or low/zero-interest loans can substantially lower a project's WACC, increasing the present discounted value. Renewable portfolio standards or clean energy standards offer quota-based mechanisms for drawing alternative energy resources into the marketplace.
You have reached the end of Lesson 12! Double check the What is Due for Lesson 12? list on the first page of this lesson and the Final Project page to make sure you have completed all of the activities listed there before we conclude the course.
Links
[1] https://www.npr.org/sections/money/2016/08/26/491342091/planet-money-buys-oil#:~:text=Planet%20Money%20Buys%20Oil%20%3A%20Planet%20Money%20%3A%20NPR&text=Planet%20Money%20Buys%20Oil%20%3A%20Planet%20Money%20We%20bought%20100%20barrels,and%20into%20someone's%20gas%20tank.
[2] http://crudemarketing.chevron.com/posted_pricing_daily.asp
[3] http://www.economist.com/news/finance-and-economics/21688446-why-oil-price-has-plunged-20-new-40
[4] http://www.eia.gov/todayinenergy/detail.cfm?id=18571
[5] https://nature.berkeley.edu/er100/readings/Farrell_2006_Risks.pdf
[6] http://www.eia.gov
[7] https://rbnenergy.com/one-way-out-yesterdays-crude-price-meltdown-super-futures-contract-expiration-and-crude-storage
[8] https://rbnenergy.com/uncharted-waters-cramers-mad-money-recap-april-20
[9] http://image.guardian.co.uk/sys-files/Guardian/documents/2001/08/14/resources-biodiversity.pdf
[10] https://www.youtube.com/watch?v=NwzaxUF0k18
[11] http://www.eia.gov/todayinenergy/detail.cfm?id=4890
[12] https://www.eia.gov
[13] https://www.theodora.com/pipelines/united_states_pipelines.html
[14] http://www.theodora.com/pipelines/united_states_pipelines.html
[15] http://www.eia.gov/dnav/pet/pet_pri_spt_s1_d.htm
[16] http://www.eia.gov/petroleum/weekly/archive/2015/150107/includes/analysis_print.cfm
[17] http://www.eia.gov/todayinenergy/detail.cfm?id=19591
[18] http://www.eia.gov/energyexplained/index.cfm?page=natural_gas_home
[19] https://www.ferc.gov/industries-data/natural-gas/overview/lng
[20] http://www.eia.gov/energy_in_brief/article/shale_in_the_united_states.cfm
[21] http://spectrum.ieee.org/energy/fossil-fuels/its-too-soon-to-judge-shale-gas
[22] http://www.economist.com/node/3136036
[23] http://energy.gov/sites/prod/files/2013/04/f0/fe_eia_lng.pdf
[24] http://www.eia.gov/analysis/studies/worldshalegas/
[25] http://www.economist.com/news/business/21571171-extracting-europes-shale-gas-and-oil-will-be-slow-and-difficult-business-frack-future
[26] https://www.eia.gov/analysis/studies/worldshalegas/
[27] http://warrington.ufl.edu/centers/purc/purcdocs/papers/0528_Jamison_Rate_of_Return.pdf
[28] https://www.raponline.org/knowledge-center/electricity-regulation-in-the-us-a-guide-2/
[29] http://www.eia.gov/energyexplained/index.cfm?page=electricity_in_the_united_states
[30] http://www.npr.org/2009/04/24/110997398/visualizing-the-u-s-electric-grid
[31] http://warrington.ufl.edu/centers/purc/purcdocs/papers/0528_jamison_rate_of_return.pdf
[32] https://www.e-education.psu.edu/eme801/sites/www.e-education.psu.edu.eme801/files/L5_The_Mechanics_of_Rate_of_Return_Regulation.pdf
[33] http://economics.mit.edu/files/2093
[34] http://www.eba-net.org/assets/1/6/10-147-184.pdf
[35] http://www.eia.gov/electricity/policies/restructuring/
[36] https://www.e-education.psu.edu/eme801/sites/www.e-education.psu.edu.eme801/files/NETL_primers.zip
[37] http://www.pjm.com/markets-and-operations/interregional-map.aspx
[38] http://www.pjm.com
[39] https://www.misoenergy.org/markets-and-operations/real-time--market-data/real-time-displays/
[40] http://www.misoenergy.org
[41] http://www.cmu.edu/epp/policy-briefs/briefs/Managing-variable-energy-resources.pdf
[42] http://www.nytimes.com/2013/08/15/business/energy-environment/intermittent-nature-of-green-power-is-challenge-for-utilities.html?pagewanted=all&_r=0
[43] https://www-proquest-com.ezaccess.libraries.psu.edu/docview/2214856470?accountid=13158&pq-origsite=summon
[44] https://www.nytimes.com/2017/12/25/business/energy-environment/germany-electricity-negative-prices.html
[45] https://www-proquest-com.ezaccess.libraries.psu.edu/docview/1979936533?pq-origsite=summon&accountid=13158
[46] https://www.cmu.edu/ceic/assets/docs/publications/phd-dissertations/2012/eric-hittinger-phd-thesis-2012.pdf
[47] http://www.netl.doe.gov/energy-analyses/refshelf/PubDetails.aspx?Action=View&PubId=473
[48] https://rmi.org/wp-content/uploads/2017/05/RMI_Document_Repository_Public-Reprts_RMI_GridDefection-4pager_2014-06.pdf
[49] https://www.cmu.edu/epp/policy-briefs/briefs/Managing-variable-energy-resources.pdf
[50] https://www.misoenergy.org/
[51] http://www.ercot.com/content/cdr/contours/rtmLmpHg.html
[52] https://www.rmi.org/wp-content/uploads/2017/04/RMIGridDefectionFull_2014-05-1-1.pdf
[53] https://www.youtube.com/watch?v=cdF-CsxqDko
[54] https://www.nerc.com/AboutNERC/Resource%20Documents/NERCHistoryBook.pdf
[55] https://ferc.gov/sites/default/files/2020-06/Order-745.pdf
[56] http://www.cadc.uscourts.gov/internet/opinions.nsf/DE531DBFA7DE1ABE85257CE1004F4C53/%24file/11-1486-1494281.pdf
[57] https://theconversation.com/will-the-supreme-court-kill-the-smart-grid-48725
[58] https://theconversation.com/the-supreme-court-saves-the-smart-grid-but-more-battles-loom-53845
[59] https://www.supremecourt.gov/opinions/15pdf/14-840_diff_pok0.pdf
[60] http://www.investopedia.com/university/accounting
[61] http://pages.stern.nyu.edu/~adamodar/New_Home_Page/AccPrimer/accstate.htm
[62] http://questromapps.bu.edu/gpo/admitted/documents/FinancialAccountingPrimer_Version10PartsIandII.pdf
[63] http://www.journalofaccountancy.com/Issues/2002/Apr/TheRiseAndFallOfEnron.htm
[64] http://en.wikipedia.org/wiki/Green_eyeshade
[65] http://www.investopedia.com/university/accounting/accounting2.asp
[66] http://www.irs.gov/publications/p946/index.html
[67] http://www.irs.gov/publications/p946/ar02.html
[68] http://pages.stern.nyu.edu/%7Eadamodar/New_Home_Page/AccPrimer/accstate.htm
[69] http://plus.maths.org/content/have-we-caught-your-interest
[70] http://en.wikipedia.org/wiki/Hyperbolic_discounting
[71] http://grist.org/article/discount-rates-a-boring-thing-you-should-know-about-with-otters/
[72] https://www.youtube.com/channel/UCU1QB1a5XJa_nTHD2lzr7Ew
[73] http://en.wikipedia.org/wiki/Geometric_series
[74] https://www.federalregister.gov/articles/2015/10/23/2015-22842/carbon-pollution-emission-guidelines-for-existing-stationary-sources-electric-utility-generating
[75] https://www.cadc.uscourts.gov/internet/opinions.nsf/DE531DBFA7DE1ABE85257CE1004F4C53/$file/11-1486-1494281.pdf
[76] https://www.c2es.org/publications/us-electric-power-sector-and-climate-change-mitigation
[77] https://grist.files.wordpress.com/2008/02/dontgetburned08.pdf
[78] http://www.eia.gov/todayinenergy/detail.cfm?id=8870
[79] http://people.stern.nyu.edu/adamodar/pdfiles/papers/VAR.pdf
[80] http://www.cnn.com/2013/11/06/us/atomic-economics/
[81] https://mapservices.pjm.com/renewables/
[82] https://www.c2es.org/site/assets/uploads/2005/06/us-electric-power-sector-and-climate-change-mitigation.pdf
[83] http://en.wikipedia.org/wiki/Value_at_risk
[84] https://www.shell.com/energy-and-innovation/the-energy-future/scenarios.html
[85] https://hbr.org/2013/05/living-in-the-futures
[86] https://nff.org/
[87] https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&uact=8&ved=0ahUKEwi-6JGil5fXAhUB1oMKHdi_D90QFggoMAA&url=https%3A%2F%2Fwww.c2es.org%2Fpublications%2Fus-electric-power-sector-and-climate-change-mitigation&usg=AOvVaw2NYpfxKh4KN7QDKD55VO6o
[88] https://www.businessnewsdaily.com/6363-debt-vs-equity-financing.html
[89] https://research.ece.cmu.edu/~electriconf/2004/krellenstein-CMU.ppt.pdf
[90] http://www.d20-ltic.org/images/AFME_Guide_to_Infrastructure_Financing_1_1.pdf
[91] http://www.youtube.com/watch?v=VpCp9DMPGkM
[92] http://www.irs.gov/pub/irs-pdf/p925.pdf
[93] http://www.investopedia.com/articles/06/capm.asp
[94] http://people.stern.nyu.edu/adamodar/New_Home_Page/datafile/wacc.htm
[95] http://en.wikipedia.org/wiki/Bond_credit_rating
[96] https://fred.stlouisfed.org/categories/32348
[97] https://fred.stlouisfed.org/series/BAMLC0A2CAAEY
[98] https://fred.stlouisfed.org/series/BAMLC0A4CBBBEY
[99] https://fred.stlouisfed.org/series/BAMLH0A3HYCEY
[100] https://fred.stlouisfed.org/series/BAMLC0A1CAAAEY
[101] https://finance.yahoo.com/quote/%5ETYX/?guccounter=1
[102] http://www.dsireusa.org/
[103] https://www.nrel.gov/analysis/green-power.html
[104] http://dsireusa.org/glossary/
[105] http://www.dsireusa.org
[106] https://www.epa.gov/renewable-fuel-standard-program
[107] http://apps3.eere.energy.gov/greenpower/markets/certificates.shtml?page=5