Using the Models: All for One, and One for All
Let’s mentally conduct a simple but revealing test of human fallibility. Imagine that the first person in a fairly long line of people whispers a bit of information to the second person in line. Then the second person whispers what he or she heard to the third person, and so on. By the time the chain of whispers reaches the last person in line, the original bit of information has likely been jumbled and changed. These changes, or “errors,” grow as each person in line slightly alters the previous whisper before passing it on.
In similar fashion, a computer model takes the initialized state of the atmosphere and then passes on its numerical forecast to the first virtual time interval (perhaps a minute or two into the future). In light of the inherent errors associated with the leap-frog scheme used by computer models (not to mention the imperfect process of initialization), it should come as no surprise that the forecast is a bit off the mark. The first time interval then passes on this error to the second time interval, which passes on a slightly different version to the third time interval, and so on. By the end of the forecast run, slight errors introduced near the start of the simulation will likely have grown.
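This compounding of step-by-step errors can be sketched with a toy one-number "atmosphere." The growth rate and error size below are invented purely for illustration; they do not come from any real forecast scheme:

```python
def integrate(state, steps, step_error=0.0):
    """Advance a toy one-number 'atmosphere' one time interval at a time.
    Each step applies the same simple growth dynamics plus a small fixed
    error standing in for truncation in the scheme and imperfect
    initialization. (The 1% growth and the error size are made up.)"""
    for _ in range(steps):
        state = state * 1.01 + step_error
    return state

# Gap between an error-free reference run and an erroneous run, at a
# short lead time (24 steps) and a longer one (120 steps).
err_short = abs(integrate(10.0, 24, step_error=0.01) - integrate(10.0, 24))
err_long = abs(integrate(10.0, 120, step_error=0.01) - integrate(10.0, 120))
print(err_short < err_long)  # prints True: the gap widens with forecast length
```

Because each interval inherits (and slightly amplifies) the errors of the one before, the gap between the erroneous run and the reference run grows with forecast length, just as the whispered message drifts further from the original the longer the line gets.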
Now suppose that there are two lines of people of equal length, and the first person in each line whispers an identical message to the second person. In keeping with the fallibility idea, the last two people in each line would likely hear two different messages. However, an independent interpreter, after listening to the last two people in line, might be able to blend the different messages and thereby deduce a more accurate form of the original whisper.
In a similar way, forecasters can take the forecasts of several computer models and compare them. Even though these forecasts may not be the same, an experienced meteorologist can blend the predictions of the models (a consensus forecast) in hopes of improving the forecast.
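A minimal sketch of the consensus idea, using hypothetical temperature forecasts (the model names and all the numbers below are invented for illustration):

```python
# Hypothetical temperature forecasts (degrees F) from three made-up models
# for the same place and valid time; "truth" is the eventual observation.
truth = 42.0
model_forecasts = {"Model A": 38.0, "Model B": 44.0, "Model C": 45.0}

# The simplest consensus: an equal-weight average of the model forecasts.
consensus = sum(model_forecasts.values()) / len(model_forecasts)

consensus_error = abs(consensus - truth)
best_single = min(abs(f - truth) for f in model_forecasts.values())

# In this example the individual errors partly cancel, so the blend beats
# even the best single model -- the payoff forecasters hope for.
print(consensus_error < best_single)  # prints True
```

An equal-weight average is only the simplest blend; in practice a forecaster may weight the models by how well each has been verifying lately, which is the human analogue of the "independent interpreter" in the whisper experiment.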
When different models essentially produce the same forecast for a specific area at a specific forecast time (in other words, the regional differences between the models are slight), meteorologists have high confidence in the computer’s solution. Then again, the models might all be wrong, which, unfortunately, happens occasionally. For example, a heavy snowstorm surprised the Eastern Seaboard on January 24-26, 2000 (see the enhanced infrared satellite image above). The short-range computer models had unanimously predicted that the storm would track too far out to sea to be of any consequence. Poignant examples like this snowstorm aside, a consensus among computer models usually translates into a more accurate forecast.
In situations when different computer models have noticeably different predictions for a specific forecast area at a specific forecast time, experienced forecasters might lean toward the model they believe is more likely to verify. How would forecasters decide which one to choose?