What can history teach us about forecasts of energy use?
Contact: Allan Chen, [email protected]
Energy forecasters underestimate the importance of "surprises," and this has led many of the major energy forecast studies of the last 50 years to overestimate the amount of energy the U.S. would be using by the year 2000.
A new article by Ashok Gadgil and Jon Koomey, of Lawrence Berkeley National Laboratory, and Paul Craig, Professor Emeritus of Engineering at the University of California at Davis, published in the 2002 Annual Review of Energy and the Environment, assesses the success of many of the major long-term energy forecasts published in the United States dating back to the 1960s.
"Our basic conclusion," says Koomey, a scientist in the Environmental Energy Technologies Division and leader of Berkeley Lab's End-Use Energy Forecasting Group, "is that forecasters in the 1950 to 1980 period underestimated the importance of unmodeled surprises. One of the most important examples is that they failed to foresee the ability of the United States economy to respond to the oil embargoes of the 1970s by increasing its energy efficiency. Not only were most forecasts of that period systematically high, but forecasters systematically underestimated uncertainties."
The authors of the article write that "Energy forecasters working in the aftermath of 1970s oil shocks expended enormous effort in projecting future energy trends." For example, forecasts of energy use in the year 2000, compiled in a 1979 review by the U.S. Department of Energy, Energy Demands 1972 to 2000, varied enormously. Actual U.S. energy use in 2000 turned out to be at the very lowest end of these forecasts.
"Energy use turned out to be lower than was considered plausible by almost every forecaster," says Craig. "Forecasters didn't anticipate the ability of the economy to limit the growth of energy use." Forecasts conducted after the first oil shock in 1973 did not predict the ability of the economy, ranging from energy-intensive industries and small businesses to homeowners, to respond to higher prices by adopting more energy-efficient technologies and practices.
Types of forecasts, their strengths and weaknesses
Trend projection, for example, was the method used in the last official government forecast before the oil embargo of 1973, the Department of the Interior's United States Energy Through the Year 2000, published in 1972. It assumed that energy use would continue growing as it had for the past two decades, at a constant rate of 3.6 percent per year -- an exponential growth rate. The study forecast total primary energy use of 201 exajoules (EJ) in the year 2000 (an exajoule is a million trillion joules). Actual energy use in 2000 was 105 EJ, so the study overestimated energy growth by almost a factor of two.
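The arithmetic behind a trend projection of this kind can be sketched in a few lines. The base-year figure below (roughly 75 EJ of U.S. primary energy use in 1972) is an assumption back-calculated from the numbers reported above, not a value stated in the article:

```python
def trend_projection(base_use_ej, annual_growth, years):
    """Extrapolate energy use forward at a constant annual growth rate."""
    return base_use_ej * (1 + annual_growth) ** years

# Assumed base year: 1972, ~75 EJ of primary energy use.
projected = trend_projection(75.0, 0.036, 2000 - 1972)  # ≈ 202 EJ, close to the study's 201 EJ
actual = 105.0  # actual U.S. primary energy use in 2000

print(f"Projected: {projected:.0f} EJ, actual: {actual:.0f} EJ, "
      f"ratio: {projected / actual:.2f}")
```

The sketch makes the article's point concrete: compounding a historical growth rate for 28 years, with no model of why the trend held, produced a forecast nearly double the actual outcome.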
A weakness of trend projection forecasts "is that they discourage searches for underlying driving forces," says Gadgil, a scientist in the Environmental Energy Technologies Division and leader of the Airflow and Transport Group. These models do not identify the causes of the trends they extrapolate, so they can't adjust when conditions change and establish a different trend. The authors argue that econometric projections, more sophisticated versions of trend projections, have the same weaknesses.
Of course, each type of forecast has its own strengths and weaknesses. All forecasts are wrong in some respect, but if the process of designing them teaches something about the world and how events may unfold, creating them will have been worth the effort.
Learning lessons for better energy forecasts
Statistically derived relationships between variables are the ones most likely to be altered by major events and policy choices. A significant example is the situation following the oil embargo of 1973: energy growth no longer tracked economic growth because of the adoption of energy efficient technologies and practices. "Forecasts that assumed continuance of historic relations between economic and energy growth were grossly wrong," says Koomey.
The authors warn against becoming obsessed with technical sophistication. "Beware of big complicated models and the results they produce," says Gadgil. "Generally they involve so much work that not enough time is spent on data compilation and scenario analysis."
"Expect the unexpected and design for uncertainty" is advice about making plans as robust as possible, given that forecasts are imperfect and fail to anticipate surprises. "If the key variables are impossible to foresee," the authors write, "then adopt strategies that are less dependent on forecasts."
"Communicate effectively," says Gadgil. "Forecasts can be technically strong but fail to influence their target audiences, because of poor communication of the results." Several studies published in the 1970s were controversial at first, but their authors engaged in a vigorous public defense of their findings, and eventually these studies were widely accepted by other researchers and policymakers.
A forecast, says Koomey, is useful because it "illuminate[s] the consequences of choices so that the people and institutions making them can evaluate the alternative outcomes based on their own values and judgment. Hidden assumptions and value judgments exist in every forecast, but the best forecasters make these explicit, so that users of their work are fully informed."
"What can history teach us? A retrospective examination of long-term energy forecasts for the United States," by Paul P. Craig, Ashok Gadgil, and Jonathan G. Koomey, appears in the Annual Review of Energy and the Environment 2002, Vol. 27: 83-118.