The civilian energy technology and related R&D programs, such as those focused on energy efficiency, pollution prevention, environmental management, renewable energy, coal, oil, and natural gas, aim largely at advancing technologies for use in the general economy. This means that the management and direction of such programs must involve not just technical experts, but also those who will ultimately manufacture, market, and use the technologies. This calls for collaborative modes of R&D review and conduct that fully engage those who understand competitive markets and consumer demands.
Accordingly, many of the Department's energy technology development and related R&D programs are deliberately designed to accommodate industrial partners. In various ways, these industrial partners provide substantial opportunities for external merit review by engaging as full participants who help plan, execute, and commercialize the R&D. In addition, the Department makes extensive use of R&D procurement arrangements that not only involve industry, but require cost-sharing by industry. Section 3002 of the Energy Policy Act of 1992 establishes minimum cost-sharing thresholds of 50 percent for technology demonstration and commercialization projects, and 20 percent for all other civilian energy research. The resulting contracts thus benefit both from routine competitive selection practices, as prescribed in Section 935.016-1 of the Department of Energy Acquisition Regulation, and from one of the most severe outside tests of research relevance: substantial financial investment by industrial R&D partners.
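The Section 3002 thresholds reduce to simple arithmetic. The following is an illustrative sketch only; the function name and structure are hypothetical, and the statute itself governs any actual cost-sharing determination:

```python
def minimum_industry_share(total_cost: float, demonstration: bool) -> float:
    """Minimum industry cost-share under Section 3002 of the Energy Policy
    Act of 1992: 50 percent for technology demonstration and
    commercialization projects, 20 percent for all other civilian energy
    research. Illustrative only."""
    rate = 0.50 if demonstration else 0.20
    return total_cost * rate
```

For a hypothetical $10 million demonstration project, industry would need to contribute at least $5 million; for a comparable research project, at least $2 million.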
At the Department's national laboratories, there is likewise a significant degree of external review of, and internal competition for, the energy technology development and related R&D programs. Every laboratory has an array of industrial advisory panels employed to review the R&D activities of each of its major research divisions. Individual research investigators must continually submit to a battery of scientific and technical reviews, both prospective and retrospective. Prospective evaluations include merit reviews of individual work proposals, almost always involving internal peers and sometimes involving external peers. Prospective evaluations also include multilevel internal reviews of the laboratories' formally submitted Field Work Proposals before they are sent to Departmental headquarters. Retrospective evaluations are performed on all R&D projects at least annually, but are more typically an integral part of ongoing research, conducted by colleagues, laboratory superiors, clients at Headquarters, and peer reviewers of research publications. In addition, retrospective evaluations using peer review are employed on an ad hoc or sampling basis to review ongoing research involving specific projects, cooperative research and development agreements (CRADAs), and other forms of joint R&D.
Input from peers is also obtained from contractor review meetings, workshops, technical society meetings, and symposia. The Fossil Energy and Energy Efficiency programs have selectively used the Office of Energy Research's Office of Program Analysis to conduct formal, independent, retrospective peer reviews of their applied research projects.
Peer review processes in some elements of the Department's civilian R&D programs are currently undergoing significant enhancement. The Technology Development program of the Office of Environmental Management, for example, is instituting peer review at the program level (see below), and is strengthening the use of "focus area review groups" at the sub-program level. Beginning in Fiscal Year 1995, laboratory Field Work Proposals, known in the Environmental Management program as Technical Task Plans, will be reviewed by teams of subject matter specialists from technical, regulatory, business, and stakeholder perspectives.
Virtually all major energy technology development and related R&D programs are periodically subjected to higher level overall program reviews involving extensive use of scientific and technical experts and industry stakeholders. The most visible of these are review committees of the National Academy of Sciences and the standing Departmental advisory committees constituted under the auspices of the Federal Advisory Committee Act. These bodies are asked primarily to comment on the content and direction of the R&D programs, including their 5-year R&D plans and associated strategic plans.
In the Technology Development program of the Office of Environmental Management, for example, top-level program reviews are conducted by the Environmental Management Advisory Board and, beginning in Fiscal Year 1995, a newly established Committee on Environmental Management Technologies of the National Academy of Sciences. Similarly, the Office of Fossil Energy is advised by the National Petroleum Council and the National Coal Council. Altogether, there are eight active committees advising the civilian energy technology and related R&D programs.
Finally, with the implementation of strategic planning and Total Quality Management principles throughout the Department, most key planning and programming decisions are now developed in full view of, and with broad participation from, outside stakeholders. For example, the Department's recently developed multiyear plan for Integrated Resource Planning was distributed to 350 stakeholders in the electric and natural gas utility industry, with formal comments received from 40 reviewers. In the Department today, every such plan must evidence extensive use of outside independent participation, review, and comment.
The Department's national security responsibilities require highly integrated, multidisciplinary, multiyear team efforts. These requirements are imposed by both the complexity and seriousness of the nuclear weapons enterprise. The Department must maintain its responsible stewardship of the nuclear weapons stockpile and preserve the special nuclear weapons technology infrastructure and core competencies that may be needed in future national security situations. At the same time, it must dismantle nuclear weapons and dispose of special nuclear materials, as specified by international agreement, and contribute to the enforcement of arms control agreements and to the prevention of the proliferation of nuclear weapons. The R&D needed to support these missions requires unique facilities, special materials-handling procedures, and highly classified know-how that, while amenable to technical and peer review, are not always amenable to the same peer review processes employed in the realm of unclassified research.
The Department has established, for example, formal peer review processes in the Office of Defense Programs. Weapons life-cycle activities are addressed by formalized joint project teams drawing members from both the Department of Energy and the Department of Defense. The Nuclear Weapons Council provides a high-level mechanism for advising on Defense Programs directions. Interaction with the Department of Defense also provides close customer feedback on major aspects of program performance.
The Department also uses formal committees composed of outside experts to review or advise on Defense Programs, including the Safety, Security, and Control Committee; the Weapon Safety Advisory Review Group; and the Inertial Confinement Fusion Advisory Panel. The Containment Evaluation Panel and the Threshold Test Ban Review Panel have also reviewed issues related to nuclear testing.
Defense Programs also uses independent outside expert groups, such as JASON (a highly qualified advisory body of scientists), to review its classified programs. The National Academy of Sciences has also reviewed Defense Programs technical activities. A large amount of unclassified research conducted within the Defense Programs is published in open peer-reviewed journals. There is also a classified peer-reviewed journal to which laboratory researchers actively contribute.
In the case of nuclear device design and much of the related weapons science and technology, detailed review requires active expertise, and there exists no broad industrial or university base from which to draw such experts. Historically, technical competition has proven invaluable in this field, and peer review is effectively designed into program activities largely through the existence of two nuclear design laboratories, Lawrence Livermore and Los Alamos. One-on-one interactions between researchers in highly classified but related fields at these two laboratories add considerably to the quality improvement process at both laboratories.
Sandia National Laboratories employs an effective means of intramural review, using "red teams" to ensure the safety and reliability of Sandia components and processes. Defense Programs has further established a formal interlaboratory (Los Alamos, Lawrence Livermore, and Sandia) peer review process for specific weapon R&D, certification, and surveillance activities. For example, every five years, with annual updates, Lawrence Livermore-Sandia and Los Alamos-Sandia teams in the Weapons Assessment Process conduct peer-reviewed studies of each other's stockpile weapons.
Recent M&O contracts for Los Alamos and Lawrence Livermore require the University of California to conduct annual science and technology self-assessments stressing external peer reviews with specific criteria. These are being implemented using evaluations by appropriately constituted external review committees of experts. These committees, taken together, evaluate all technical activities at these laboratories. The University of California President's Council Panel on National Security reviews the weapons programs of Los Alamos and Livermore. Panel members include technical experts drawn from outside the University of California and laboratory communities. These and other mechanisms are used to assess and maintain quality in these programs.
More than 20 Federal agencies carry out R&D programs. Of these, the Department of Energy's R&D program is one of the largest, being responsible for about 10 percent of the total Federal R&D budget of $72 billion in Fiscal Year 1994. In addition, the Department of Energy has perhaps one of the most diverse sets of missions, complicated by the unique demands of nuclear weapons design.
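As a back-of-envelope check on the figures above (the 10 percent share is approximate, so the result is only indicative of magnitude):

```python
federal_rd_budget_fy1994 = 72e9  # total Federal R&D budget, FY 1994, in dollars
doe_share = 0.10                 # Department of Energy's approximate fraction
doe_rd_budget = federal_rd_budget_fy1994 * doe_share
print(f"${doe_rd_budget / 1e9:.1f} billion")  # roughly $7 billion
```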
Because of this diversity and size, the Department's R&D programs taken together resemble the many facets of Federal R&D programs as a whole. Similarly, the Department's application of peer review principles and methods shares many of the strengths, as well as some of the weaknesses, of such practices as applied to Federal R&D in general. Other agencies, for example, use an array of peer review methods, at all organizational levels, to promote quality, relevance, and productivity in R&D programs. The Department, likewise, applies these methods to the different levels in the management process hierarchy, and to the different types of R&D activities, as is most appropriate to each situation.
The National Institutes of Health, the National Science Foundation, and many parts of the Department of Energy's fundamental science, health and environmental research, and basic energy sciences programs all have extensive external research programs in the physical and life sciences. Each agency uses similar prospective peer review methods, by mail or by panels, before funding proposals. Some agencies with their own laboratories also make their research facilities available to other users, such as the National Aeronautics and Space Administration's wind tunnels. Research at such user facilities, like that at the Department's facilities, is merit-reviewed using prospective peer reviews.
Like the Department of Energy, the Departments of Defense and Commerce (the National Institute of Standards and Technology), the National Aeronautics and Space Administration, and, to some extent, the National Institutes of Health (NIH) all conduct internal laboratory research programs. Each agency relies primarily upon in-progress, retrospective reviews for guiding and gauging its internal laboratory research.
In the area of basic research, the National Institutes of Health is an agency often cited as a model for emulation in its use of merit reviews with peer evaluation. Ninety percent of the research activities at NIH are external, and are subjected to a two-stage review process. In the first stage at NIH, a panel of 15 to 20 scientists, experts in the relevant field, read each proposal. Generally, three panel members review each proposal in detail against specified criteria and prepare formal briefs, while the other panelists familiarize themselves with each proposal. All panelists take part in a group discussion and vote formally. The panel then reports to a National Advisory Council for the second stage. Each institute of the NIH has a single National Advisory Council of at least 12 members, not all of whom are necessarily scientists (in most proposals, there are considerations beyond pure science).
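The two-stage flow described above can be sketched in outline. The data structures and voting logic below are hypothetical simplifications; the actual NIH procedure uses detailed scoring criteria not modeled here:

```python
from dataclasses import dataclass, field

@dataclass
class Proposal:
    title: str
    briefs: list = field(default_factory=list)  # detailed briefs from ~3 assigned reviewers
    votes: list = field(default_factory=list)   # formal votes from all panelists

def stage_one_panel(proposal: Proposal, panel: list) -> Proposal:
    """Stage 1: a panel of 15 to 20 experts. Three members review the
    proposal in detail and prepare formal briefs; all panelists then
    discuss and vote. The unanimous vote below is a placeholder."""
    for reviewer in panel[:3]:
        proposal.briefs.append(f"brief by {reviewer}")
    proposal.votes = [True] * len(panel)
    return proposal

def stage_two_council(proposal: Proposal) -> bool:
    """Stage 2: the institute's National Advisory Council (at least 12
    members, not all scientists). Sketched here as ratifying a majority
    panel vote; real councils weigh factors beyond pure science."""
    return sum(proposal.votes) > len(proposal.votes) / 2
```

With a hypothetical 16-member panel, stage one yields three detailed briefs and sixteen formal votes, which the council stage then acts upon.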
Review of internal laboratory research at the NIH is conducted by the Board of Scientific Counselors for each institute. Each board consists of outside scientists chosen for their expertise related to each institute. However, it should be noted that many Board members are funded by the institute under review.
An authoritative critique[Note 19] of the NIH peer review system concluded that (a) the excellence of the overall NIH research program is built on a variety of approaches to managing research, using both prospective and retrospective reviews; (b) prospective and retrospective peer review have different strengths and weaknesses, and encourage creativity in different ways; and (c) the overall NIH research program was best served by retaining prospective review in its external (for example, R&D support via grants) programs and retrospective review in its internal (for example, in-house laboratory) programs.
As strong as the NIH and other agency peer review practices appear to be, in each area where commonality exists in the kind of research (for example, basic research) and in the research communities (for example, universities, research centers), the Department of Energy has well-established peer review practices that are quite comparable and perhaps better in some areas. This comparability notwithstanding, the Department can only benefit by examining more thoroughly and understanding more completely the best practices of other agencies. To this end, the Department intends to continue its study of other agency practices, participate in interagency forums on peer review, and implement some pilot programs to test innovative approaches.
The sharing of peer review strengths, however, means that the Department may also share some of its weaknesses. The process of merit review with peer evaluation, in general, is under pressure and has been criticized by many in the research community, in part due to its cost, complexity, administrative burden, scarcity of available peers, slowness, and questions about equity and fairness. Even with these concerns, however, peer review is still widely regarded as the best method available for allocating scarce R&D resources. Accordingly, the Department of Energy seeks ways to both respond to these concerns and develop improved peer review systems, as outlined below.
As documented in this paper, the Department of Energy uses peer review extensively throughout its R&D programs to both guide research direction (prospective peer review) and gauge research progress (retrospective peer review). In many instances, both forms of peer review are applied to the same research activity. The Department's peer review practices in many of its more mature R&D programs may be counted among the best practices of all agencies. Peer review practices in some of the more recently established and growing R&D programs are evolving and being strengthened. Virtually all major R&D programs experience multiple levels of review by qualified and independent review and advisory committees.
External R&D activities conducted via grants, contracts, and cooperative agreements are governed by an elaborate system of statutory, regulatory, and procedural requirements that virtually ensure that the vast majority of R&D awards are subjected to merit reviews with peer evaluation and competitive selection. Internal laboratory R&D activities are likewise subjected to multiple reviews by peers, both prospective and retrospective, with increasing competition. Retrospective merit reviews with peer evaluation have been confirmed by independent studies as an effective means for promoting research relevance and productivity in the laboratories. Moreover, in many Departmental laboratory R&D programs, retrospective reviews are increasingly being supplemented by prospective reviews of laboratory Field Work Proposals, where appropriate. Administrative requirements for cost-sharing and joint planning of applied R&D with industry add further to the checks and balances of R&D management.
In April 1994, the Department reaffirmed its strong commitment to peer review in its strategic plan, Fueling A Competitive Economy, by specifying that an important "success indicator" for its science and technology programs is
Recognizing the importance of peer review, having surveyed peer review practices at other Federal agencies, and having reviewed the suggestions of such experts as Chubin and Hackett,[Note 20] Bozeman,[Note 21] and Kostoff[Note 22] for the evaluation and improved use of peer review, the Department intends to strengthen further its use of peer review, in forms appropriate to its missions, in all of its technical programs, and at all levels of decisionmaking. In so proceeding, the Department recognizes that serious reviews can impose major costs on those being reviewed, as well as on the reviewers and supporting staff. Peer review systems can introduce significant delays in R&D program execution. If implemented too rigidly, peer review systems can stifle flexibility and creativity. The experiences of several R&D agencies suggest that it is possible to create elaborate systems of overlapping reviews that are unnecessarily complex and burdensome. Being aware of these potential risks, the Department has identified three broad areas for improvement.
First, while recognizing the need for flexibility and efficiency, the Department of Energy will seek to enhance the use and application of peer review at all appropriate levels of R&D program management and execution.
Second, the Department of Energy's management of its peer review processes will be strengthened, including the establishment of guiding policies and principles, improved oversight, and broadened documentation of use.