March 31, 2005
 
Designing Science-Friendly Supercomputers
Berkeley Lab Researchers Launch a Process to Ensure Science-Driven Computer Architecture

Not so long ago, every computer in the world was a supercomputer. Machines with names like ENIAC and MANIAC ran on vacuum tubes and filled rooms and were dedicated to solving scientific problems (although often of a military nature). It was a short-lived era. Today millions of far more powerful machines sit on desktops, most of them doing anything but science.

Today's supercomputers are certainly faster and more powerful than desktop PCs, but despite dramatic increases in raw computational performance, their ability to carry out scientific applications has fallen steadily behind their theoretical capabilities. At least until recently.

Chips made for the home and business market are some of the most advanced microdevices in the world, but they are not optimized for scientific computing.

Part of the problem is that computer makers have followed the money, understandably, into homes and private businesses. PCs are manufactured by the millions and subject to intense competitive pressures, and their processor chips have to be speedy, sophisticated devices — some of the most advanced microdevices in existence. But a PC's applications don't access memory the way scientific applications do, so when the same kinds of processors are clustered together — which is one way of building a supercomputer — and used for doing science, they are incapable of reaching anything like their theoretical peak performance.
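
To see why, consider a kernel typical of scientific codes (this illustration is not from the article): it streams through arrays far larger than any cache, so the processor spends most of its time waiting on memory rather than computing. The C sketch below is modeled on the well-known STREAM "triad" benchmark; the array size is an arbitrary choice for the example.

    #include <stdlib.h>

    /* Triad kernel, as in the STREAM benchmark: per iteration, two loads,
     * one store, and two floating-point operations.  On commodity
     * processors the memory system, not the arithmetic units, limits how
     * fast this runs, so sustained performance is a small fraction of the
     * advertised peak flop rate. */
    void triad(double *a, const double *b, const double *c,
               double scalar, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            a[i] = b[i] + scalar * c[i];
    }

    int main(void)
    {
        size_t n = 20 * 1000 * 1000;        /* far larger than any cache */
        double *a = malloc(n * sizeof *a);
        double *b = malloc(n * sizeof *b);
        double *c = malloc(n * sizeof *c);
        if (!a || !b || !c) return 1;

        for (size_t i = 0; i < n; i++) { b[i] = 1.0; c[i] = 2.0; }
        triad(a, b, c, 3.0, n);

        free(a); free(b); free(c);
        return 0;
    }

Each iteration moves at least 24 bytes between memory and processor for only two floating-point operations, so the achievable flop rate is set by memory bandwidth. Desktop applications, by contrast, tend to reuse small working sets that fit in cache, and that is what commodity chips are tuned for.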

It's a problem that custom-designed supercomputers like Japan's Earth Simulator don't have. The huge Earth Simulator was built virtually from scratch, with its scientific sponsors (principally from the Japan Marine Science and Technology Center) and contributing computer vendors (led by the NEC Corporation) closely collaborating from the beginning to build a machine optimized to investigate geosciences and environmental questions on a global scale. When the Earth Simulator debuted in the spring of 2002, reaching 87 percent of its theoretical peak performance and running at 35.6 teraflop/s (35.6 trillion floating-point operations per second), almost five times faster than the world's next-fastest machine, it inspired computer scientists around the world to have a closer look at the role of supercomputers in science.
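
Taken together, those two figures put the machine's theoretical peak at roughly

    \text{peak} \approx \frac{35.6~\text{Tflop/s}}{0.87} \approx 41~\text{Tflop/s}

which agrees with the Earth Simulator's published peak of 40.96 teraflop/s: 640 nodes, each holding eight vector processors rated at 8 gigaflop/s apiece (those hardware figures come from the machine's published specifications, not from this article).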

In the U.S., the Department of Energy, the Department of Defense, and other agencies formed committees and launched investigations. A typical conclusion, like one from a 2004 report by the High End Computing Revitalization Task Force, was that "the 1990s approach of building systems based on commercial off-the-shelf components" was not the best way to tackle applications of national importance, and that an interagency collaborative approach was needed to find alternative technologies.

Independently, a team of computer scientists and mathematicians at Berkeley Lab was already at work, collaborating with researchers who depend on high-end computation in fields as diverse as fusion energy, molecular biology, astrophysics, nanostructures, combustion, and climate modeling. The team was led by Horst Simon, director of Berkeley Lab's Computational Research Division and of DOE's National Energy Research Scientific Computing Center (NERSC).

The most recent version of the Berkeley Lab team's conclusions about how best to design supercomputers for science appears in the January 2005 issue of the Journal of the Earth Simulator under the title "Science-Driven System Architecture: A New Process for Leadership Class Computing." In a nutshell, they conclude that "the most effective approach to designing a computer architecture that can meet these scientific needs is to analyze the underlying algorithms of these applications and then, working in partnership with vendors, design a system targeted to these algorithms."

But the team did more than just produce another report. Beginning in 2002, with partners from other institutions, they worked closely with IBM to design a machine optimized for a range of scientific applications, a design they dubbed Blue Planet. Blue Planet emphasized flexibility, for not all scientific problems are susceptible to the same mathematical approach or have the same needs for memory access and organization — differences that, among others, impact hardware capabilities and organization.

The collaboration identified such concerns as the performance of individual processors in a parallel-processing scheme, the performance of their interconnections, and software adaptable to different kinds of jobs. They found specific problems like "memory contention" in IBM's existing multiprocessor design, which severely degraded the performance of processors sharing the same interface with main memory. And they devised the Virtual Vector Architecture (ViVA), a way to combine groups of individual processors in a node into powerful virtual vector processors.
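
The article does not spell out how ViVA works internally, but the general idea (several processors within a node cooperating on one long, data-parallel loop, the kind a classic vector processor streams through in hardware) can be loosely sketched in software terms. The C/OpenMP fragment below is only an analogy for that idea, not IBM's or Berkeley Lab's actual design; the function and variable names are invented for the example.

    #include <stdio.h>
    #include <stdlib.h>

    /* A long, data-parallel loop: dividing its iterations among the
     * processors of one shared-memory node makes the node behave, very
     * roughly, like a single wide vector unit.  This is a software analogy
     * for the "virtual vector" idea only, not the ViVA hardware design. */
    static void scaled_add(double *restrict y, const double *restrict x,
                           double alpha, long n)
    {
        #pragma omp parallel for schedule(static)
        for (long i = 0; i < n; i++)
            y[i] += alpha * x[i];
    }

    int main(void)
    {
        long n = 10 * 1000 * 1000;
        double *x = malloc(n * sizeof *x);
        double *y = malloc(n * sizeof *y);
        if (!x || !y) return 1;

        for (long i = 0; i < n; i++) { x[i] = 1.0; y[i] = 2.0; }
        scaled_add(y, x, 0.5, n);            /* compile with -fopenmp */
        printf("y[0] = %g\n", y[0]);         /* prints 2.5 */

        free(x); free(y);
        return 0;
    }

The sketch also shows where memory contention bites: if all the processors in a node reach main memory through one shared interface, they compete for bandwidth on exactly this kind of loop, which is the problem the collaboration flagged in the existing design.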

IBM has already applied technical advances resulting from the Blue Planet design to ASC Purple, the next supercomputer for scientific applications, and further improvements are in the pipeline.

The result of these efforts was a Blue Planet processor-node design that has been incorporated by IBM in the design of its new generation of microprocessors. One immediate effect was the application of the Blue Planet node design to the construction of the ASC Purple supercomputer at Lawrence Livermore National Laboratory, scheduled to demonstrate a performance of 100 teraflop/s in the summer of 2005.

Many possibilities for improving supercomputer performance remain, including the refinement of virtual processor architectures. But the dramatic advances already achieved clearly confirm the importance of including scientists in the design process from the beginning — especially since, as the Berkeley Lab researchers concluded, neither off-the-shelf cluster machines nor highly expensive special-purpose machines are the best choice for the future: "The high-performance systems of the future have to be balanced in many ways, since the scientific applications of the future will combine many different methods. There is no longer a single method that dominates."
