Tom McKone: What Models Can (and Can't) Tell Us About Risk
Contact: Allan Chen, [email protected]
Scientists use computer models to estimate how pollutants distribute themselves in the environment. How far can we trust these models? How much can they tell us about health risks, and what are their limits? Measuring exposure to environmental pollutants and assessing the hazards to health that they pose is a complicated endeavor with interesting scientific challenges, and some nonscientific ones too.
Thomas McKone and his colleagues study the physical processes by which pollutants migrate through the environment. McKone is a senior staff scientist in the Environmental Energy Technologies Division, where he leads the Environmental Chemistry, Exposure and Risk Group, as well as an adjunct professor in UC Berkeley's School of Public Health. Researchers in McKone's group and throughout the Lab conduct field studies of many types of pollutants, ranging from particulates and pollutants in the air to hazardous chemical and radioactive wastes.
The computer models at the center of McKone's work advance the basic science of pollutant transport and also provide tools that help policymakers decide whether and how to regulate chemicals that threaten human health. His group developed the CalTOX model, originally created in the early 1990s to help California's Department of Toxic Substances Control set goals for cleaning up contaminated soils and adjacent air, surface water, sediments, and ground water. The model incorporates multimedia transport from or to ground, air, and water, and can estimate multiple-pathway exposures in humans. CalTOX continues to be widely used not only for setting clean-up goals but also for comparative risk assessments and life-cycle impact assessments.
McKone's expertise is frequently sought by U.S. science advisory bodies. Early in 2006, he sat on a National Academy of Sciences (NAS) panel that released a revised assessment of health risks from exposures to dioxin and dioxin-like chemicals. McKone currently sits on two NAS panels, one titled "Environmental Decision Making: Principles and Criteria for Models" and another called "Improving Risk Analysis Approaches Used by the U.S. EPA." The environmental models committee is scheduled to release its report early in 2007; the risk analysis committee plans to release its reports in the summer of 2008.
McKone has also worked with colleagues at Trent University, Canada, and the Swiss Federal Institute of Technology to develop a multimedia fate and transport model called BETR (Berkeley-Trent). BETR-North America addresses the continental-scale transport and distribution of persistent pollutants, and BETR-Global incorporates features of general circulation models of the atmosphere to study the long-range transport of pollutants.
What is a risk? What is a hazard?
Understanding the field of risk assessment means taking some care with its terminology, says McKone. "What is the probability that a human will get cancer?" is a question about risk, he says. "A hazard, however, involves human possibility. If you can show that a chemical causes cancer, then you have shown that it is a hazard." Everyone is at risk of getting cancer, some more, some less. Exposure to a carcinogen, a cancer-causing chemical, is a hazard, because it carries the possibility of increasing your risk of disease.
"Science can measure exposures and set up experiments to demonstrate hazard, based on occupational or other exposed groups or based on animal studies," he says, "but you cannot do a scientific experiment to assess human risk. Risk assessment is not a science," although it does have a foundation in toxicology and chemistry.
McKone's NAS committee on improving risk-analysis approaches is one of a series of NRC committees formed during the last 25 years to issue guidelines defining how risk analysis can best be applied by Federal agencies like the EPA that are faced with assessing risks posed by environmental pollutants. The NRC first issued risk-assessment guidelines in 1986, then revised and extended them in 1996. A review of risk analysis science is again in progress.
Help from computer models
Scientists conduct field studies to understand how pollutants, such as those classed as persistent organic pollutants (POPs), disperse through the environment, traveling from air to water to soil, and so on into living plant and animal tissue. Scientific field studies are always limited by time and location, however.
So to get the bigger environmental picture, scientists create computer models that can estimate where the chemicals are going to go, how much will get into the air, the water, or the ground, and how long they'll stay there before they break down into simpler chemicals or combine with others. Persistence is measured in half-lives: the time it takes for a POP's concentration to fall to half its value at the time of measurement. POPs attract considerable attention from scientists and regulators because they take so long to break down. This means they have more time to diffuse all over the earth, which increases human exposure to these sometimes carcinogenic and mutagenic chemicals.
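Half-life arithmetic follows simple first-order decay. A minimal sketch (the function name and the numbers are illustrative, not from any particular study):

```python
def concentration(c0, t, half_life):
    """Concentration remaining after time t under first-order decay.

    c0 is the starting concentration; t and half_life must be in the
    same time units (years, hours, etc.).
    """
    return c0 * 0.5 ** (t / half_life)

# A pollutant with a 5-year half-life: after 10 years (two half-lives),
# a quarter of the original concentration remains.
print(concentration(100.0, 10.0, 5.0))  # 25.0
```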
McKone sees models as descriptors of the physical and chemical processes that govern the behavior of chemicals in the environment. "You can build relationships between factors that you can't otherwise do without a model," he says, including chemical properties, transport within and among different media, and abundance in the environment. Turning the crank on the model results in snapshots in space and time of what pollutant concentrations might look like, and what the potential human exposure to them might be. "Because of complexity, you can relate things in a model that you couldn't in your mind, because there is too much to keep in your head. A model puts all these pieces together."
But just understanding how the pieces fit together doesn't guarantee correct results, McKone says. "You can still get results that don't correlate to the real thing. So models are both potentially powerful, and potentially dangerous."
While a model can hint at what interventions have the best chance of reducing pollutant concentrations and exposures, "A lot of people think models provide predictions," McKone says, "but they don't do this. Models are not very useful if you don't have something with which to anchor them. You need observations to confirm the model and move it closer to a representation of reality."
This is a particular problem for policymakers, who "don't like to make choices involving uncertainty," McKone says. "A danger is that they may just use model results to tell them what to do."
Too much information
"Adding more detail into a model doesn't necessarily get you a better result if you don't understand the basic science," says McKone. "Model development has to be paced with the science."
He cites an example: "Years ago, when I was a graduate student, I saw a regional pollutant model for radionuclide transfer from soil to vegetation. It was a square grid with lots of detail (airflow, crops growing in specific areas) and it was used to calculate crop uptake. But there was only one experiment done that provided data, and that didn't account for the uptake to plants by species type. Seasonal uptake of some species could vary by a factor of 10 to 50. The spatial variation was less than a factor of 10, but the uptake by crops was uncertain by more than a factor of 10, so the added detail did not improve the model result."
Says McKone, "The reliability of the calculation depends on the reliability of the least well known element. If you don't know how uncertain this weak link is, then you are making the model results look more accurate than they really are." This is why performance evaluation of existing models, especially those currently in use by federal agencies to guide the regulation process, has become a hot area of research.
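The weak-link point can be made quantitative. When a model multiplies independent uncertain factors, each described by a geometric standard deviation (GSD), the log-variances add, so a single factor-of-10 uncertainty swamps several modest ones. A small sketch of this rule (the GSD values are illustrative, not from McKone's example):

```python
import math

def combined_gsd(gsds):
    """GSD of a product of independent lognormal factors.

    Log-variances add, so the combined GSD is
    exp(sqrt(sum of (ln GSD_i)^2)).
    """
    return math.exp(math.sqrt(sum(math.log(g) ** 2 for g in gsds)))

# One factor known only within 10x, alongside three factors each
# known within 1.5x: the overall uncertainty (~11.1x) is barely
# worse than the weak link alone (10x).
print(round(combined_gsd([10.0, 1.5, 1.5, 1.5]), 1))  # 11.1
```

This is why refining the well-characterized factors, without improving the weak link, leaves the overall result nearly as uncertain as before.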
Thanks to continuing support from the EPA, McKone conducts regular R&D to improve models for use in experimental studies and risk assessment. "Some of the questions we ask are: What are the critical uncertainties? What processes do you need in a model, and what can you do without?"
He stresses that "an important quality in a good model is called parsimony. This is defined as making the model as complicated as needed to solve a problem, but not more so. You don't want to add details that make the model overly complicated."
A model model study
McKone and Matthew MacLeod, formerly of Berkeley Lab's Earth Sciences Division and now at the Swiss Federal Institute of Technology in Zurich, brought together researchers from nine universities and institutions around the world to work on a unique study intended to make models of persistence and long-range transport of POPs more useful to the policymaking community.
"We worked with an international group of researchers, nine groups in all, each with their own model," says McKone. The researchers were located in Switzerland, Germany, Canada, France, Italy, and Japan. "To my knowledge, no study of this kind has been undertaken before. We ran all nine models on various persistent organic pollutants to characterize them, and to compare the models' output for the same chemical against one another, to see how well each model characterized these chemicals."
McKone says they soon realized they needed to create a "surface of possible properties. We found that there were four important properties that characterized these chemicals. We created 4,000 'chemicals' (not real ones, but imaginary chemicals with idealized properties) and we ran all the models through this 'space' of chemical properties." The goal was to see whether the existing models produced consistent or conflicting results, and from this information develop a consensus model that captured the minimum set of essential components.
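Sweeping a grid over a few log-spaced property axes is one way such a "space" of hypothetical chemicals could be built. A sketch under assumed axes (the property ranges and grid density here are illustrative, not the study's actual values):

```python
import itertools
import math

def log_space(lo, hi, n):
    """n values spaced evenly on a log scale between lo and hi."""
    step = (math.log10(hi) - math.log10(lo)) / (n - 1)
    return [10 ** (math.log10(lo) + i * step) for i in range(n)]

# Illustrative axes for the four properties the study varied.
kaw = log_space(1e-6, 1e2, 8)     # air-water solubility ratio
kow = log_space(1e2, 1e10, 8)     # oil-water solubility ratio
t_air = log_space(1.0, 1e4, 8)    # half-life in air (hours)
t_water = log_space(10.0, 1e5, 8) # half-life in water (hours)

# Every combination is one hypothetical chemical to run through
# each of the nine models.
chemicals = list(itertools.product(kaw, kow, t_air, t_water))
print(len(chemicals))  # 4096 hypothetical chemicals
```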
When they compared the output of the various models they found there was a lot of commonality, "but also there were subtle differences. There were areas where the models had the same results and areas where they diverged from one another. So all nine teams proposed a model that included the elements of all nine models that led to common results. And we resolved elements that produced divergent results. This process seemed to be leading to the simplest model possible for solving the problem that nonetheless had enough detail and complexity to accurately model the result properly."
The four that mattered
Four chemical properties determined the behavior of POPs in all nine models. Two were solubility ratios: the solubility of the POP in air divided by its solubility in water, and the oil-water solubility ratio. The latter is an indicator of the POP's mobility, based on how much it sticks to soils, sediments, and the lipids (fats) in biological organisms. A POP that accumulates rapidly in fat tissue is a cause for concern, since human beings will build up high levels over a lifetime.
The two other properties were the chemical half-life in air and the chemical half-life in water. The latter half-life is a good measure of persistence in surface waters, soils, and sediments. Some POPs, even if they are persistent, can be volatile, meaning that they will cycle rapidly through different media. A persistent, volatile POP needs to be treated differently than a persistent, stable POP.
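One way to picture the four properties is as a small record per chemical, with crude flags distinguishing a persistent, volatile POP from a persistent, stable one. The field names and thresholds below are illustrative assumptions, not values from the study:

```python
from dataclasses import dataclass

@dataclass
class Chemical:
    """The four properties that drove POP behavior in all nine models."""
    air_water_ratio: float    # solubility in air / solubility in water
    oil_water_ratio: float    # affinity for soils, sediments, and lipids
    half_life_air_h: float    # half-life in air, hours
    half_life_water_h: float  # half-life in water, hours

    def is_persistent(self, threshold_h=1440.0):
        # Illustrative cutoff: half-life in water beyond ~60 days.
        return self.half_life_water_h > threshold_h

    def is_volatile(self, threshold=1e-3):
        # Illustrative cutoff: a high air-water ratio means the
        # chemical cycles rapidly among media.
        return self.air_water_ratio > threshold

c = Chemical(0.01, 1e6, 200.0, 5000.0)
print(c.is_persistent() and c.is_volatile())  # True
```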
"We are not just developing more models, or refining them, or improving the user interface," says McKone. "Our group's goal is to ask 'how do decision makers use models? What is it that they need to do their work effectively?' Then we determine what it is you can do to make the models more effective."
The properties they assigned to the hypothetical POPs not only helped demonstrate that the nine models produced similar results, but also showed that policymakers should single out real POPs with these specific properties, because those properties govern how POPs behave in the environment.
What makes a good model result?
In addition to parsimony, McKone mentions two other qualities that make a useful model: "One of the things decision makers want is transparency. They need to know how the model works: the method has to be transparent to the world."
Finally, McKone says, in order to "answer the relevant science and policy questions," models need fidelity. "So in addition to making the model transparent and as simple as possible, you must incorporate all the processes that are important in linking the final result to the factors that, if changed, will alter that result. You are always walking a fine line between how much detail you need to get fidelity, while not incorporating so much detail that it overwhelms the final users."
The problem of making models useful to nonscientists is not specific to environmental pollutants. From his conversations with colleagues, McKone has learned that it comes up in plenty of other questions, like forecasting the weather.
There are plenty of sophisticated, supercomputer-based weather models, says McKone, but many daily forecasts are based on judgment, perhaps augmented by simple PC-based, plug-in models that incorporate rules of thumb. "The bottom line is that no one wants to be overwhelmed with data. We all want just the few basic results that are useful to us."
Additional information
Web sites for National Academy of Sciences panels and information on models: