
Theory Seminar

The Graphics Card As A Stream Computer

Suresh Venkatasubramanian

Massive data sets have radically changed our understanding of how to
design efficient algorithms; the streaming paradigm, whether measured in
the number of passes of an external-memory algorithm or in the single
pass and limited memory of a streaming algorithm, appears to be the
dominant method for coping with large data.
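As a concrete illustration (not part of the talk itself), the single-pass, limited-memory model can be sketched in Python: the algorithm sees each element once and keeps only constant state, no matter how large the stream is.

```python
def streaming_mean(stream):
    """Compute the mean of a data stream in one pass with O(1) memory.

    The stream is consumed item by item; nothing is retained except a
    running count and a running sum, so the input may be arbitrarily
    large (or a generator that never materializes the data at all).
    """
    count = 0
    total = 0.0
    for x in stream:
        count += 1
        total += x
    return total / count if count else 0.0

# A generator stands in for a data set too large to hold in memory;
# only one element exists at a time.
print(streaming_mean(i for i in range(1, 101)))  # → 50.5
```

More sophisticated streaming algorithms (quantiles, distinct counts, heavy hitters) follow the same shape: a small sketch of state, updated once per element.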

A very different kind of massive computation has had the same effect at
the level of the CPU. The most prominent example is the computations
performed by a graphics card. The operations themselves are very simple
and need very little memory, but they must be performed extremely fast
and in parallel to whatever degree possible. The result is a stream
processor that is highly optimized for stream computations. An intriguing
side effect is the growing use of the graphics card as a general-purpose
stream processing engine. In an ever-increasing array of applications,
researchers are discovering that performing a computation on a graphics
card is far faster than performing it on a CPU, and so are using the GPU
as a stream co-processor.
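The stream-processor model described above — a simple kernel applied independently, and therefore in parallel, to every element of a stream — can be sketched as follows. This is plain Python imitating the structure, not actual GPU code; the names and the thread pool standing in for shader units are illustrative assumptions.

```python
from multiprocessing.dummy import Pool  # thread pool; stands in for parallel GPU units

def kernel(x):
    """A deliberately simple per-element operation, in the spirit of a
    GPU shader kernel: no shared state, no knowledge of other elements."""
    return x * x + 1.0

def run_on_stream(stream, workers=4):
    """Apply the kernel to every element of the stream in parallel.

    On a real graphics card the "workers" would be many shader units
    and the dispatch would be handled by the driver; here a thread
    pool merely imitates the data-parallel structure.
    """
    with Pool(workers) as pool:
        return pool.map(kernel, stream)

print(run_on_stream([1.0, 2.0, 3.0]))  # → [2.0, 5.0, 10.0]
```

Because each kernel invocation is independent, the same computation scales with the number of processing units — the property that makes the GPU attractive as a general-purpose stream co-processor.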

Suresh Venkatasubramanian works at AT&T Labs, where he has been since
1999. He received his Ph.D. from Stanford University and works mostly in
computational geometry, graphics, visualization, and problems arising
from massive data sets. In a previous incarnation he also worked on
geometric problems arising in computational biology and chemistry.

Sponsored by

Martin Strauss