Faculty Candidate Seminar

Variational Bayesian Methods for Unsupervised Latent Factor Models of Text and Audio

Matt Hoffman

Postdoctoral Researcher, Columbia University
In this talk, I will discuss variational strategies for fitting two Bayesian models that explain high-dimensional media data in terms of sets of latent factors.

The first model, Latent Dirichlet Allocation (LDA), is a popular model of text corpora that learns to represent documents as mixtures of latent "topic" distributions. We develop an online variational Bayes (VB) algorithm for LDA. Online LDA is based on online stochastic optimization with a natural gradient step, which we show converges to a local optimum of the VB objective function. It can handily analyze massive document collections, including those arriving in a stream. We study the performance of online LDA in several ways, including by fitting a 100-topic model to 3.3M articles from Wikipedia in a single pass. We demonstrate that online LDA finds topic models as good as or better than those found with batch VB, and in a fraction of the time.
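To make the stochastic-optimization structure concrete, here is a minimal numpy sketch of an online VB update for LDA in the spirit described above: a local E-step fits per-document variational parameters on a minibatch, and the M-step takes a natural-gradient step of size rho_t = (tau0 + t)^(-kappa) toward the topic parameters the minibatch suggests (with kappa in (0.5, 1], the step sizes satisfy the Robbins-Monro conditions). The class name, hyperparameter values, and convergence tolerance are illustrative assumptions, not the talk's specific settings.

```python
import numpy as np
from scipy.special import digamma

def dirichlet_expectation(x):
    """E[log p] where p ~ Dirichlet(x); works row-wise on matrices."""
    return digamma(x) - digamma(x.sum(axis=-1, keepdims=True))

class OnlineLDA:
    def __init__(self, n_topics, vocab_size, n_docs_total,
                 alpha=0.1, eta=0.01, tau0=1.0, kappa=0.7, seed=0):
        rng = np.random.default_rng(seed)
        self.K, self.W, self.D = n_topics, vocab_size, n_docs_total
        self.alpha, self.eta = alpha, eta        # Dirichlet hyperparameters
        self.tau0, self.kappa = tau0, kappa      # learning-rate schedule
        self.t = 0                               # minibatch counter
        # lambda: variational Dirichlet parameters of the topic-word dists
        self.lam = rng.gamma(100.0, 0.01, (self.K, self.W))

    def update(self, docs):
        """One stochastic step on a minibatch; docs is a list of
        (word_ids, counts) pairs in bag-of-words form."""
        exp_Elog_beta = np.exp(dirichlet_expectation(self.lam))
        sstats = np.zeros_like(self.lam)
        for ids, cts in docs:
            gamma = np.ones(self.K)              # per-doc topic proportions
            exp_Elog_theta = np.exp(dirichlet_expectation(gamma))
            beta_d = exp_Elog_beta[:, ids]       # K x N_d slice for this doc
            for _ in range(100):                 # local E-step iterations
                phi_norm = exp_Elog_theta @ beta_d + 1e-100
                gamma_new = self.alpha + exp_Elog_theta * ((cts / phi_norm) @ beta_d.T)
                converged = np.mean(np.abs(gamma_new - gamma)) < 1e-3
                gamma = gamma_new
                exp_Elog_theta = np.exp(dirichlet_expectation(gamma))
                if converged:
                    break
            phi_norm = exp_Elog_theta @ beta_d + 1e-100
            sstats[:, ids] += np.outer(exp_Elog_theta, cts / phi_norm) * beta_d
        # M-step: a natural-gradient step of size rho_t toward the lambda
        # we would estimate if the whole corpus looked like this minibatch.
        rho = (self.tau0 + self.t) ** (-self.kappa)
        lam_hat = self.eta + (self.D / len(docs)) * sstats
        self.lam = (1.0 - rho) * self.lam + rho * lam_hat
        self.t += 1
```

Because each update touches only one minibatch, memory use is constant in corpus size, which is what allows a single streaming pass over millions of documents.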

The second model, Gamma Process Nonnegative Matrix Factorization (GaP-NMF), is a new Bayesian nonparametric model of audio spectrograms that addresses the problem of latent source discovery and separation in audio recordings. GaP-NMF allows us to discover what sounds (e.g., bass drums or guitar chords) are present in a recording and to isolate or suppress individual sources. Crucially, this model is able to decide how many latent sources are necessary to model the data. This feature is particularly valuable in this application, since it is impossible to guess a priori how many sounds will appear in a given recording. Although the GaP-NMF model lacks the conditional conjugacy enjoyed by models such as LDA, we are nonetheless able to efficiently fit it to data using a novel variational algorithm.
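The variational algorithm itself is beyond a short sketch (the non-conjugacy is what makes it novel), but the generative model is compact, and a sketch of it shows how the gamma process lets the data choose the number of sources. Below, a synthetic spectrogram is drawn from a truncated version of such a model: the truncation level K, the function name, and the hyperparameter values are illustrative assumptions rather than the talk's exact specification.

```python
import numpy as np

def sample_gap_nmf(F=100, T=250, K=50, a=1.0, b=1.0, alpha=2.0, seed=0):
    """Draw a synthetic power spectrogram X (F x T) from a truncated
    gamma-process NMF prior. K is a truncation level, not the number of
    sources: the gamma-process weights theta push most components toward
    zero, so only a data-determined handful contribute appreciably."""
    rng = np.random.default_rng(seed)
    W = rng.gamma(a, 1.0 / a, size=(F, K))        # spectral template per source
    H = rng.gamma(b, 1.0 / b, size=(K, T))        # source activations over time
    theta = rng.gamma(alpha / K, 1.0 / alpha, K)  # gamma-process source weights
    mean = W @ (theta[:, None] * H)               # E[X_ft] = sum_k theta_k W_fk H_kt
    X = rng.exponential(mean)                     # exponential (Itakura-Saito) likelihood
    return X, W, H, theta

# A few weights dominate; the rest are effectively switched off:
X, W, H, theta = sample_gap_nmf()
print(np.sort(theta)[::-1][:10] / theta.sum())
```

With shape parameter alpha/K well below 1, most draws of theta land near zero while a few are large, which is the mechanism that lets inference prune unneeded sources as K grows.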

Sponsored by

CSE