Data Mining in Search of Hidden Oil
September 5, 1997
By Paul Pruess, paul_pruess@lbl.gov
To find that hidden oil, the DeepLook collaboration of oil-producing and service companies,
led by BP Exploration, has awarded four grants--out of more than 100 proposals
submitted--to develop new fluid-imaging techniques. One project, "Data Fusion
for the Prediction of Reservoir Fluids," is being jointly undertaken by
researchers from Berkeley Lab's Earth Sciences Division (ESD), Oak Ridge
National Laboratory, and NASA's Jet Propulsion Laboratory.
"Our goal is to produce an efficient and robust characterization of the
reservoir using soft computing techniques," says Masoud Nikravesh, an ESD
researcher who is also a member of the Berkeley Initiative in Soft Computing
(BISC) in UC Berkeley's Electrical Engineering and Computer Sciences
Department. Nikravesh and colleague Larry Myer represent Berkeley Lab in the
cooperative effort.
The data will come from huge corporate databases already in oil industry
archives. Well logs and core samples can reveal the mineral composition,
geologic structure, porosity, and fluid content of the rock underlying
individual wells. But the number of wells in a field is limited; boreholes
sample only a narrow column of rock and can pass right by substantial deposits
of oil. Seismic data is cheaper and covers a much larger volume of the
subsurface, but it can be notoriously difficult to resolve into interpretable
pictures.
When many kinds of information are compared and combined--"fused"--the result
can be so much data "that you can't find its structure just by looking at it,"
Nikravesh says. "It must also be mined. If you're mining gold you have to sift
through a lot of sand to get a little gold; we have to sift a lot of numbers to
get the real data."
The sifting tools of this "data mine" are the soft computing techniques
championed by Lotfi Zadeh of BISC: fuzzy logic, neural networks, and other
computational methods that, as Nikravesh puts it, "exploit tolerance for
imprecision, uncertainty, and partial truth."
An important objective is to uncover rules for interpreting disparate but
complementary information from many different sources. Given dependable data
from the resulting "intelligent" software, well-understood principles of
physics can be used with confidence to construct a simulation of an oil
reservoir more accurate than any now in existence.
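As a toy illustration of the fusion step (our own sketch with made-up numbers,
not the DeepLook software), consider combining two estimates of the same
porosity: a precise reading from a nearby well log and a noisy one derived
from seismic data. Inverse-variance weighting trusts each source in proportion
to its stated certainty and reports the uncertainty of the fused value:

    def fuse(estimates):
        """estimates: list of (value, std_dev) pairs -> fused (value, std_dev)."""
        weights = [1.0 / (s * s) for _, s in estimates]
        total = sum(weights)
        value = sum(w * v for w, (v, _) in zip(weights, estimates)) / total
        return value, (1.0 / total) ** 0.5

    well_log = (0.18, 0.01)  # precise, but valid only near the borehole
    seismic  = (0.12, 0.05)  # covers the whole volume, but noisy
    print(fuse([well_log, seismic]))  # roughly (0.178, 0.010): leans on the log

Real reservoir data are far messier, which is where the learned rules come in,
but the principle is the same: no single source is trusted outright.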
In return for their investment in this kind of research, the DeepLook
collaborators want practical answers: subsurface maps that pinpoint bypassed
oil in a reservoir with known uncertainty, plus accurate predictions of the
future performance of the field. They want programs that produce these maps and
predictions cheaply and quickly. Wells are expensive--they can cost half a
million dollars on dry land and a hundred times that in hostile environments or
in complex geologic settings.
While Nikravesh is confident that the Berkeley Lab/Oak Ridge/JPL joint
project will do the immediate job for the oil companies, the implications of
this kind of intelligent software reach much further. As an example, he cites
JPL's expertise in remote sensing, demonstrated by missions like Mars
Pathfinder.
During the DeepLook project, Berkeley Lab scientists have worked closely with
NASA researchers Sandeep Gulati and Amir Fijany from the Ultracomputing
Technologies Group at JPL, and together they are developing fundamentally
different techniques for characterizing geological structures. Says Nikravesh,
"If we can learn to work seriously with each other, applying our knowledge of
the Earth--and of soft computing--and their pioneering methods of imaging,
whole new worlds will open."