LBL Purchases Massively Parallel Computer (MasPar MP-2)

July 16, 1993

LBL has purchased its first supercomputer, a MasPar MP-2 massively parallel processing system. Equipped with 4,096 processors, the MasPar is the Laboratory's most important computer acquisition in a decade.

Information and Computing Sciences Division Director Stu Loken says that up until now, LBL researchers either have devised ways to solve problems without supercomputers or have accessed supercomputers elsewhere. Because these machines have been so expensive, relatively few exist, and time on them is difficult to book. Loken says the Laboratory's MasPar acquisition creates new opportunities for LBL researchers, providing them readily accessible state-of-the-art computing resources.

To help introduce Laboratory scientists to the MasPar and its parallel processing environment, Computing Resources is offering a number of services at no charge for the remainder of the fiscal year. The group will evaluate the suitability of using the machine for a proposed project and provide free training and support. To check out the MasPar, contact Ruth Hinkins at x5402.

The proliferation of research images and video was one of the chief reasons the MasPar was purchased. When video cameras are used to capture data, vast amounts of digital information -- 30 frames per second, or the equivalent of more than 2,000 pages of words a second -- are generated. This digital flow must be processed and stored by a computer with a high-speed input/output interface as well as the ability to do video image processing. Loken says the MasPar is the best machine available to process and analyze visual scientific data -- everything from sky maps charting the structure of the early universe to medical images showing the neurochemistry in Alzheimer's patients.
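To put that rate in rough perspective: a single 512 x 512 frame at 8 bits per pixel holds about a quarter of a megabyte, so 30 such frames a second amounts to roughly 8 megabytes per second; at a few kilobytes of text per typed page, that is indeed on the order of a couple of thousand pages' worth of characters every second. The frame size and bit depth here are illustrative assumptions, but the order of magnitude matches the figure above.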

According to ICSD's Bill Johnston, who heads a team creating new hardware and software for processing and analyzing visual data streams over high-speed networks, the MasPar will significantly enhance the capabilities of the Lab's distributed computing environment. LBL has played a pioneering role in the development of distributed scientific computing, forging new links that make the location of expensive computing resources immaterial.

In a distributed system, high-speed networks can be used to tie together hardware and researchers around the country, connecting them to distant experimental facilities. Loken believes that the MasPar can help the Lab realize its vision of opening the Advanced Light Source to scientists working at remote locations around the world.

The technical specifications of the MasPar are impressive. A massively parallel Single Instruction Multiple Data computer, the MasPar has 4,096 32-bit processors that deliver a peak performance of 17,000 million instructions per second. The supercomputer is capable of transferring data at rates of up to one gigabyte per second. The MasPar's input/output subsystem includes an 11-gigabyte RAID disk storage array that can transfer data at sustained rates of 18 megabytes per second. Users on the HiPPI network, to which the MasPar is connected, can transfer data as fast as 800 megabits per second. Despite capabilities that were almost unimaginable a decade ago, nobody yet knows how to instruct this supercomputer, or any other, to search a large image database. LBL is collaborating with the Sunnyvale-based MasPar Computer Corporation to develop technology to do this.

"Searching a text database for a word or string of characters and searching a video database for an object is a very different problem," said Johnston. "A computer easily can find every reference in a text database to `fish' but there is no ready way to look through an archival set of video images and find all those with fish. We'll be working on developing this technology for transfer to industry for use by the public."

To support parallel computing at LBL, Craig Eades, who heads LBL's Unix group, has created a new Parallel Computing Project. The project is headed by Ruth Hinkins, and its staff also includes Elon Close and Ludmilla Soroka. They are providing training in the PVM (Parallel Virtual Machine) system and in Express, a set of tools designed to make parallel and distributed programming easy, efficient, and portable. Additional support is being provided by MasPar systems engineer Jonathan Becher, who is here one day a week to train prospective users of the MasPar.
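For readers curious what PVM programming looks like, the sketch below is a minimal PVM 3 master/worker exchange in C. It is a generic illustration of the library's message-passing model, not an example from the LBL training; the executable name "pvm_demo" and the message tag are made up for this sketch.

/* Minimal PVM 3 master/worker sketch. The same executable acts as master
 * when it has no PVM parent and as worker when it has been spawned. */
#include <stdio.h>
#include <pvm3.h>

#define MSG_TAG 1

int main(void)
{
    (void)pvm_mytid();                 /* enroll this process in the virtual machine */
    int parent = pvm_parent();         /* PvmNoParent means we are the master        */

    if (parent == PvmNoParent) {
        int worker_tid, n = 42, reply;

        /* Spawn one copy of this program (named "pvm_demo" here) as the worker. */
        if (pvm_spawn("pvm_demo", NULL, PvmTaskDefault, "", 1, &worker_tid) != 1) {
            fprintf(stderr, "spawn failed\n");
            pvm_exit();
            return 1;
        }

        pvm_initsend(PvmDataDefault);  /* start a new outgoing message   */
        pvm_pkint(&n, 1, 1);           /* pack one integer               */
        pvm_send(worker_tid, MSG_TAG);

        pvm_recv(worker_tid, MSG_TAG); /* block until the worker replies */
        pvm_upkint(&reply, 1, 1);
        printf("worker returned %d\n", reply);
    } else {
        int n, result;

        pvm_recv(parent, MSG_TAG);     /* receive the integer from the master */
        pvm_upkint(&n, 1, 1);
        result = n * 2;                /* stand-in for real work              */

        pvm_initsend(PvmDataDefault);
        pvm_pkint(&result, 1, 1);
        pvm_send(parent, MSG_TAG);
    }

    pvm_exit();                        /* leave the virtual machine */
    return 0;
}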

Currently, 19 LBL research groups are exploring the use of the new computing environment. The scientific problems involved include electron scattering, photoelectron diffraction, charged particle radiotherapy, underground imaging, analysis of light scattering, neural network research, and Monte Carlo calculations.

Eades acknowledges that writing code for a parallel processing system like the MasPar is different from the sequential programming traditionally used in computing. However, he notes that many research facilities are acquiring massively parallel supercomputers, which are much less expensive than conventional sequential supercomputers. Like researchers elsewhere, LBL scientists must confront the challenge of parallel programming, says Eades.

"I see it as a question of remaining competitive," he said. "I want people to start thinking about solving problems in a new way. We have learned to write programs in a certain style. Now, the times they are a changing. We in Computing Resources are here to foster that change."

Computing With Massively Parallel Computers

Massively parallel supercomputers like LBL's new MasPar MP-2 come with a reputation that precedes them.

People have spent years learning how to program computers that run through program code sequentially, one step at a time. To program a parallel system, they must instead write code for thousands of processors that work in lockstep.

The people who support the MasPar admit that this transition is daunting but insist that researchers weigh this challenge against the prospective rewards and inherent advantages of computing on a machine with 4,096 processors working in unison.

What computing problems are appropriate for the MasPar parallel architecture? MasPar systems engineer Jonathan Becher says there are five indicators researchers can use to tell them whether their application and the MasPar computing environment are a good fit.

Age of the Code:

"There is a concept in programming called dusty decks," said Becher. "This refers to code that was written so long ago that nobody around has any insight into its logic. These are poor codes to be converted into MasPar. If the original author or someone who is maintaining the code is around, you have a higher probability of success. The best situation is if you are developing a new application. Then it's usually a more straightforward proposition."

Data Set Size -- The Bigger the Better:

Remember, says Becher, the MasPar has 4,096 processors. So if your data set has fewer than that number of elements or pieces of data, you probably don't want to use the MasPar.

"Suppose you are looking at an image that is 512 x 512 pixels or 262,144 pixels. So," said Becher, "each pixel gets assigned to a processor. Let's say you want to clean up the noise in an image, a very common operation. That involves a process called thresholding and filtering, and the process has to be run on each pixel. With a single instruction set machine like the MasPar, this single operation can be run on 4,096 pixels at a time. However, suppose your image is eight x eight pixels, or 64 pixels. The MasPar is not the right machine for you."

Compatible Application and Computing Architectures:

Becher says that, essentially, computing solutions can be broken down into two types, one of which is ideally suited to the MasPar. "You can think of applications as either data parallel or control parallel," said Becher. "For an example of a problem best done in data parallel, think of a rowing team. The coxswain calls out the stroke command and each member strokes simultaneously, but in a different part of the boat. Alternatively, in the control parallel approach, everyone would stroke independently of each other, and the boat wouldn't go as well.

"On the other hand, take the case of making a car, an assembly line, which is a control parallel model. Specialists are required for the multitude of jobs. We can imagine doing it in control parallel but that involves having 4,000 individuals each build a car one step at a time."

The MasPar is a data parallel system, and problems that are control parallel are best done on another computer. Becher says perhaps 80 percent of problems are data parallel, citing a study by Syracuse University professor Geoffrey Fox.
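One way to see the distinction in code is the short plain-C sketch below, with made-up stage functions standing in for the specialists: the data parallel version applies the identical sequence of steps to every element, like the rowing crew stroking on command, while the control parallel version hands each stage to a different step of a pipeline, like stations on an assembly line. Only the first pattern maps naturally onto the MasPar's lockstep processors.

/* Data parallel versus control parallel, sketched in plain C.
 * The stage functions are illustrative placeholders. */
#include <stdio.h>

#define N 8

static double stage_a(double x) { return x + 1.0; }   /* three different    */
static double stage_b(double x) { return x * 2.0; }   /* "specialist" steps */
static double stage_c(double x) { return x - 3.0; }

int main(void)
{
    double crew[N] = {1, 2, 3, 4, 5, 6, 7, 8};
    double line[N] = {1, 2, 3, 4, 5, 6, 7, 8};

    /* Data parallel: every element receives the identical sequence of
     * operations. On the MasPar each element would live on its own PE and
     * all of them would execute each step at the same instant. */
    for (int i = 0; i < N; i++)
        crew[i] = stage_c(stage_b(stage_a(crew[i])));

    /* Control parallel: each stage is owned by a different processor and the
     * data flows between them. Here the pipeline is only simulated by three
     * separate sequential passes, one per stage. */
    for (int i = 0; i < N; i++) line[i] = stage_a(line[i]);
    for (int i = 0; i < N; i++) line[i] = stage_b(line[i]);
    for (int i = 0; i < N; i++) line[i] = stage_c(line[i]);

    printf("both orderings give the same results for %d elements\n", N);
    return 0;
}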

Degree of Parallelism:

The MasPar's forte is using its 4,096 processors to run a succession of identical or mostly identical operations, as in the image-processing example above. If you must run a series of different operations on each pixel, then don't use the MasPar.

Data Independence:

Communication between processors should be limited whenever possible. "If you have some degree of communication, that doesn't mean you are not a good fit," said Becher. "It just means the less interaction between processors, the better the performance on all parallel machines."
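As a rough illustration of what more or less communication means, the plain-C sketch below (again standing in for the MasPar's data-parallel dialect, with an arbitrary array size and coefficients) contrasts a fully independent update with a neighbor-coupled one.

/* Data independence, sketched in plain C. */
#include <stdio.h>

#define N 4096

int main(void)
{
    static double a[N], b[N];

    for (int i = 0; i < N; i++)
        a[i] = (double)i;

    /* Fully independent step: each element is updated from its own value
     * alone, so on a data-parallel machine no PE needs to talk to another. */
    for (int i = 0; i < N; i++)
        a[i] *= 0.5;

    /* Neighbor-coupled step: each result reads adjacent elements, which on
     * the MasPar would live on neighboring PEs, so every pass also costs
     * inter-processor communication. Still data parallel, just slower. */
    for (int i = 1; i + 1 < N; i++)
        b[i] = (a[i - 1] + a[i] + a[i + 1]) / 3.0;

    printf("independent and neighbor-coupled updates over %d elements\n", N);
    return 0;
}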