AI Seminar

Determinantal Point Processes: Representation, Inference, and Learning

Alex Kulesza

Many tasks in fields like computer vision and natural language processing rely on models over complex, structured output spaces. However, standard graphical models are computationally tractable only in certain restricted settings, and inference approximations often fail in the presence of global, negative interactions between variables. Determinantal point processes (DPPs) offer a promising and complementary approach.
I will discuss how DPPs, which arise in quantum physics and random matrix theory, can be used as probabilistic models for real-world subset selection problems where diversity is desired. For example, DPPs can be used to choose diverse sets of high-quality search results, to build informative summaries by selecting diverse sentences from documents, or to model non-overlapping human poses in images or video. Remarkably, DPPs capture these global, negative correlations while admitting tractable exact inference algorithms for computing the normalization constant, marginals, and conditional probabilities, and for sampling.
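To make the tractability claims concrete, here is a minimal numpy sketch (not part of the talk): for a DPP with kernel L, the normalizer over all 2^N subsets is det(L + I) in closed form, the probability of a subset Y is det(L_Y) / det(L + I), and inclusion marginals come from the marginal kernel K = L(L + I)^{-1}. The kernel L and the subset Y below are made-up placeholders for illustration.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)

# A small positive semidefinite DPP kernel L (hypothetical example data).
B = rng.normal(size=(5, 5))
L = B @ B.T
N = L.shape[0]

# Normalization: the sum of det(L_Y) over all 2^N subsets Y is det(L + I).
Z = np.linalg.det(L + np.eye(N))

# Probability of one particular subset Y: P(Y) = det(L_Y) / det(L + I).
Y = [0, 2]
p_Y = np.linalg.det(L[np.ix_(Y, Y)]) / Z

# Marginal kernel K = L (L + I)^{-1}: K_ii is the probability that item i
# appears in a sample, and det(K_A) is the marginal probability that A is
# contained in the sampled subset.
K = L @ np.linalg.inv(L + np.eye(N))
marginals = np.diag(K)

# Sanity check on this tiny example: brute-force enumeration of all subsets
# recovers the closed-form normalizer.
brute = sum(
    np.linalg.det(L[np.ix_(S, S)])
    for r in range(N + 1)
    for S in combinations(range(N), r)
)
assert np.isclose(brute, Z)

print(f"P(Y = {Y}) = {p_Y:.4f}")
print("Inclusion marginals:", np.round(marginals, 3))
```

The closed-form normalizer is what sets DPPs apart from most structured models, where computing the partition function means summing over exponentially many configurations.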
I will present our recent work developing new extensions and algorithms for DPPs, including a novel factorization and dual representation that together enable efficient inference for DPPs over exponentially large sets of structures. I will discuss how recursions for the elementary symmetric polynomials allow us to condition a DPP on the size of the selected subset, and how we can learn the parameters of a DPP from labeled data. Throughout, I will show experimental results on real-world tasks like document summarization, multiple human pose estimation, image search diversification, and threading of large collections.
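The size-conditioning mentioned above yields a k-DPP: the same determinantal probabilities renormalized by the elementary symmetric polynomial e_k of the kernel's eigenvalues rather than by det(L + I). As a hedged sketch (again with a made-up kernel and subset), the standard recursion e_j^(n) = e_j^(n-1) + lambda_n * e_{j-1}^(n-1) computes these normalizers efficiently:

```python
import numpy as np
from itertools import combinations

def elementary_symmetric(eigvals, k):
    """Compute e_0..e_k of the eigenvalues via the recursion
    e_j^(n) = e_j^(n-1) + lambda_n * e_{j-1}^(n-1)."""
    e = np.zeros(k + 1)
    e[0] = 1.0
    for n, lam_n in enumerate(eigvals):
        # Update j in decreasing order so e[j - 1] still holds the
        # value from the previous iteration (n - 1 items).
        for j in range(min(k, n + 1), 0, -1):
            e[j] += lam_n * e[j - 1]
    return e

# Hypothetical kernel, as in the previous sketch.
rng = np.random.default_rng(0)
B = rng.normal(size=(6, 6))
L = B @ B.T
lam = np.linalg.eigvalsh(L)

# k-DPP: condition the DPP on |Y| = k; the normalizer becomes e_k(lambda).
k = 3
e = elementary_symmetric(lam, k)
Y = [1, 3, 5]
p_Y = np.linalg.det(L[np.ix_(Y, Y)]) / e[k]

# Sanity check: summing det(L_Y) over all size-k subsets recovers e_k.
brute = sum(
    np.linalg.det(L[np.ix_(S, S)]) for S in combinations(range(6), k)
)
assert np.isclose(brute, e[k])

print(f"k-DPP probability of {Y}: {p_Y:.4f}")
```

The recursion runs in O(Nk) time, so conditioning on subset size adds little overhead to the inference routines above.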

Sponsored by Toyota