
CSE Seminar

Programming Abstractions for Large-Scale Parallel Machines

Amir Kamil, Postdoctoral Fellow, Lawrence Berkeley National Laboratory

In the last decade, performance gains in sequential programming have
stagnated due to power constraints. As a result, parallel computing
has become the primary tool for increasing performance on modern
machines. In this talk, I will present a brief introduction to
parallel programming and some of the challenges it raises for
programmers. In particular, concurrent execution must be carefully
managed to avoid aberrant behavior, and data locality and
communication costs must be taken into account in order to achieve
good performance. One of the primary means of addressing both
concurrency and locality on large-scale machines is to use the single
program, multiple data (SPMD) model of parallelism, which combines a
fixed set of independent threads of execution with global collective
communication and synchronization operations. While SPMD has
advantages of scalability and simplicity, it presents a flat model of
parallelism that does not fit well with hierarchical algorithms and
machines. In this talk, I will also present an extension of the SPMD
model that enables hierarchical algorithms to be easily expressed
while obtaining better performance on hierarchical machines. Finally,
I will briefly discuss other work on improving productivity and
performance of large-scale parallel programs.
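The SPMD model described above can be illustrated with a minimal sketch. The talk does not specify a language, and real SPMD codes typically use MPI or a PGAS language such as UPC or Titanium; the Python version below simply emulates the model with a fixed set of threads that each run the same program on their own rank, using a barrier for collective synchronization:

```python
import threading

# SPMD sketch: a fixed set of "ranks" all execute the same program
# on different data, meeting at collective synchronization points.
NUM_RANKS = 4
barrier = threading.Barrier(NUM_RANKS)
partial = [0] * NUM_RANKS   # one slot per rank for local results
result = [0] * NUM_RANKS    # each rank's copy of the global result

def spmd_main(rank):
    # Each rank computes its local contribution (here, a partial sum
    # over its own slice of the range 0..39).
    partial[rank] = sum(range(rank * 10, (rank + 1) * 10))
    barrier.wait()          # collective synchronization: all ranks arrive
    # A naive all-reduce: after the barrier, every rank can safely
    # combine all partial results into the same global value.
    result[rank] = sum(partial)

threads = [threading.Thread(target=spmd_main, args=(r,))
           for r in range(NUM_RANKS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Every rank now holds the same global sum: sum(range(40)) == 780.
print(result)
```

The hierarchical extension mentioned in the abstract would, in this picture, let the ranks split into teams (for example, one team per shared-memory node) so that collectives and data layout can match the machine's hierarchy rather than treating all ranks as a flat set.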

Amir Kamil is a postdoctoral fellow in the Computer Languages and
System Software group at Lawrence Berkeley National Laboratory (LBNL).
He began teaching as an undergraduate at UC Berkeley, holding the
position of Undergraduate Student Instructor for five semesters. He
continued as a PhD student at Berkeley, where his research focused on
compiler analysis, optimization, and programming models for
large-scale parallel programs. While a PhD student, Amir taught twice
as a Graduate Student Instructor and was also the primary instructor
for the Discrete Mathematics and Probability course in the summer of
2011. After completing his PhD in 2012, Amir taught the introductory
Computer Science course at Berkeley as a Lecturer in Spring 2013.
Since then, he has been working at LBNL, continuing his research on
programming models, optimization, and tools for large-scale parallel
programs.
