Forecasting Strategies: Playing More Than One Bingo Card
Our approach to using computer guidance in weather forecasting is akin to playing bingo ... the more bingo cards you play, the greater the odds that you'll win. In the context of computer guidance, this bingo approach means that we usually try to avoid choosing "the model of the day" and putting all of our forecasting eggs into one basket. Rather, we strive to include the input of as many computer models as we can into our forecasts.
To show you the kind of difficulties you can encounter by choosing the "model of the day," check out (below) the lower-right panel of the 12-hour GFS forecast initialized at 12Z on July 27, 2009 (here's the entire four-panel prog). For the record, the forecast was valid at 00Z on July 28. Focus your attention on the Southwestern United States. The GFS did not predict any rain in New Mexico and Arizona during the six-hour period ending at 00Z on July 28.
And now for a reality check. The 2245Z visible satellite image from July 27, 2009 (shown below) belies the GFS forecast. Indeed, thunderstorms had already erupted over the region during the afternoon, primarily in response to high-level heat sources (the mountains). Just to confirm that measurable rain fell, check out the estimated rainfall from the radar at Albuquerque, New Mexico as of 2357Z on July 27. Granted, rainfall was pretty spotty within the radar range of Albuquerque, but there are totals between 1.5 and 2 inches in a few spots. So let's all agree that the GFS forecast, at the very least, gave an overly dry impression of the day.
One reason that the GFS had difficulty predicting these thunderstorms lies with the limitations of the mathematical scheme used to simulate the complex process of convection (such a scheme is called convective parameterization). The reality that numerical weather prediction must face is that its ultimate goal of perfectly simulating convection (thunderstorms) by computer is essentially hopeless – the computing power needed to resolve such small scales currently lies out of our technological reach.
Even the WRF, which is a mesoscale model whose purpose is to better predict smaller-scale features, didn't exactly hit a home run over the Southwestern United States on July 27, 2009. Here's the 12-hour prog from the WRF initialized at 12Z on July 27, 2009 (valid at 00Z on July 28). Focus your attention on the lower-right panel. Yes, the WRF made a better forecast than the GFS, but it left a lot to be desired. Still, by not focusing solely on the GFS and taking the WRF into account, a forecaster might get a sense for the possibility of thunderstorms associated with high-level heat sources.
Later in the chapter, we'll talk about ensemble forecasts. For the record, an ensemble forecast is a set of predictions (all valid at the same time) from a series of models that are run with varying initial conditions. Each model run is called an ensemble member. Check out this ensemble forecast for the maximum precipitation predicted by any single ensemble member (there were 21 members) during the six-hour period ending at 00Z on July 28, 2009. Again, the ensemble forecast was not perfect over the Southwestern United States, but it sure would have drawn your attention to some models predicting rain from showers and thunderstorms, especially over New Mexico.
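To make the idea of an "ensemble maximum" product concrete, here is a minimal sketch (in Python, with tiny made-up grids rather than real model output) of how such a product can be derived: at each grid point, keep the largest six-hour precipitation total predicted by any of the 21 members.

```python
# Toy illustration of an "ensemble maximum" precipitation product.
# All values are invented; this is not any operational system's code.
import random

random.seed(42)

NUM_MEMBERS = 21      # as in the ensemble described above
ROWS, COLS = 4, 5     # a tiny stand-in for a real forecast grid

# Each member: a ROWS x COLS grid of six-hour precipitation totals (inches).
members = [
    [[round(random.uniform(0.0, 2.0), 2) for _ in range(COLS)]
     for _ in range(ROWS)]
    for _ in range(NUM_MEMBERS)
]

# Ensemble maximum: the wettest member's value at each grid point.
ensemble_max = [
    [max(members[m][r][c] for m in range(NUM_MEMBERS)) for c in range(COLS)]
    for r in range(ROWS)
]

print(ensemble_max[0])
```

Because it shows the wettest solution at every point, the ensemble maximum can alert a forecaster to rain that most members (and a single deterministic run) would miss, exactly the way it flagged showers over New Mexico here.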
We offer this preview of ensemble forecasting to make a first impression that looking at more than one computer model is like playing bingo with more than one card ... it increases the odds of winning (that is, the odds of making a more accurate forecast). Such a strategy is part of the list of general attributes we believe that all good weather forecasters have. Good forecasters:
- Possess a working knowledge of the scientific principles and concepts that govern the behavior of the atmosphere.
- Possess a working knowledge of forecasting principles and techniques.
- Gain enough experience to know which principles to apply to any given situation.
- Develop the ability to interpret statistical and numerical model guidance.
- Acquire knowledge of how a general forecast must be modified to account for local effects at a specific site.
So a good forecaster would have noticed that computer models predicted a closed 500-mb high over the Southwest ... note the closed 594-decameter height contour on the upper-left panel of the 12-hour WRF prog below (valid at 00Z on July 28). Knowing that the 500-mb high would promote relatively warm weather, and that the WRF and some of the ensemble forecasts indicated rainfall, forecasters put two and two together and predicted high-level heat source thunderstorms.
Having just warned you about "choosing the model of the day," we point out that forecasters can still glean valuable insight by looking for trends in successive runs of individual models. For example, the predicted track of a low-pressure system may shift progressively farther north on successive model runs, indicating to forecasters that they might consider adjusting their forecast to account for the low’s more northward path. By trending toward a particular evolution of a storm over successive simulations, a single computer model might point the way to the ultimate solution to the forecast problem. That’s the idea behind the useful forecasting mantra “the trend is your friend.” Of course, forecasters would also weigh computer guidance from other models before making any final decision.
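The "trend is your friend" idea can be sketched in a few lines of Python. The latitudes below are invented for illustration; a real forecaster would read the low's position off successive model runs.

```python
# Toy illustration of spotting a run-to-run trend in a low's predicted track.
# Predicted latitude (degrees N) of the low's center, one value per model
# run, oldest run first. Values are hypothetical.
run_latitudes = [38.0, 38.6, 39.1, 39.7]

# Latitude change between consecutive runs.
shifts = [later - earlier for earlier, later in zip(run_latitudes, run_latitudes[1:])]

# A trend exists when every shift points the same direction.
trending_north = all(s > 0 for s in shifts)
trending_south = all(s < 0 for s in shifts)

if trending_north:
    print("Successive runs nudge the low farther north; consider adjusting the forecast.")
```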
The internal consistency of individual models is another consideration that experienced forecasters routinely weigh. Forecasters have higher confidence in models whose successive runs do not waver from a specific weather scenario. Fickle models that flip-flop between different solutions do not instill much confidence, so forecasters usually discount them. When computer models suggest the potential for a big snowstorm, a few "old-school" forecasters wait until three consecutive model runs are internally consistent before going public with specific predictions for snowfall. Even though this wait-and-see approach is somewhat conservative, jumping the gun with regard to predicting snow totals sometimes results in forecasters having "egg" on their collective faces, which only serves to diminish public confidence in the profession of meteorology.
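The "three consistent runs" rule of thumb can be expressed as a simple check. The snowfall numbers and the tolerance below are invented for illustration; in practice, "consistent" is a judgment call, not a fixed threshold.

```python
# Toy sketch of the "wait for three consistent runs" rule of thumb.
# Predicted storm-total snowfall (inches), one value per run, oldest first.
snow_totals = [4.0, 9.5, 8.8, 9.2]   # hypothetical values
TOLERANCE = 1.0                      # hypothetical max spread among the last 3 runs

last_three = snow_totals[-3:]
consistent = (max(last_three) - min(last_three)) <= TOLERANCE

print("Go public with specific totals" if consistent else "Wait for another run")
```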
Forecasters also check to see if a model’s solution agrees with evolving weather conditions. For example, if a particular six-hour forecast from the WRF predicted rain in southern Ohio with temperatures just above 32°F (0°C), but observations at that time indicated heavy snow with temperatures several degrees below 32°F (0°C), forecasters would tend to discount (or at least view with a wary eye) the model's solution beyond that time.
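This kind of reality check can be sketched as a comparison of a model's forecast against observations valid at the same time. The temperatures, precipitation types, and the 5-degree threshold below are all hypothetical.

```python
# Toy sketch of checking a model run against concurrent observations.
# All values and the error threshold are invented for illustration.
model = {"temp_f": 34.0, "precip_type": "rain"}   # hypothetical 6-hour forecast
obs   = {"temp_f": 27.0, "precip_type": "snow"}   # hypothetical observations

TEMP_ERROR_LIMIT_F = 5.0   # hypothetical tolerance before flagging the run

temp_busted = abs(model["temp_f"] - obs["temp_f"]) > TEMP_ERROR_LIMIT_F
type_busted = model["precip_type"] != obs["precip_type"]

discount_run = temp_busted or type_busted
print("View this run's later forecasts with a wary eye"
      if discount_run else "Run verifies so far")
```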
Until now, we've focused our attention on short-range forecasting, which typically deals with a forecast period extending roughly 48 hours into the future. Let's now turn our attention to medium-range forecasting.