AI Seminar

Learning Invariant and Robust Representations

Honglak Lee

Learning invariant and robust representations is an important problem in machine learning and pattern recognition. In this talk, we will discuss two approaches to this problem. First, we present a novel framework for transformation-invariant feature learning that incorporates linear transformations into feature learning algorithms. For example, we present the transformation-invariant restricted Boltzmann machine, which compactly represents data by its weights and their transformations and achieves invariance of the feature representation via probabilistic max pooling. In addition, we show that our transformation-invariant feature learning framework can be extended to other unsupervised learning methods, such as autoencoders and sparse coding. Second, we will present ongoing work on a robust representation learning algorithm that combines bottom-up unsupervised learning and top-down supervised learning in a unified way; by focusing on relevant signals, it can effectively handle irrelevant input patterns and thus learn more informative features.
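
To make the first idea concrete, below is a minimal Python sketch of pooling filter responses over a set of linear transformations. It is illustrative only: the transformation set (rotations), the function name encode_invariant, and the hard max pooling are assumptions for the example; the transformation-invariant RBM described in the talk uses a probabilistic version of this pooling step within a learned model.

```python
import numpy as np
from scipy.ndimage import rotate


def encode_invariant(patch, filters, angles=(0, 90, 180, 270)):
    """Toy transformation-invariant encoding (illustrative sketch).

    Each filter is applied under a set of linear transformations
    (here: rotations of the filter), and the responses are max-pooled
    over the transformation set, so the resulting feature is roughly
    invariant to those transformations of the input.
    """
    features = np.empty(len(filters))
    for j, f in enumerate(filters):
        responses = []
        for angle in angles:
            # Rotate the filter (a linear transformation of its support)
            # and correlate it with the input patch.
            tf = rotate(f, angle=angle, reshape=False, order=1)
            responses.append(np.sum(patch * tf))
        # Hard max pooling over transformations; a probabilistic
        # (softmax-style) pooling would be used in the RBM setting.
        features[j] = max(responses)
    return features


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    patch = rng.standard_normal((8, 8))
    filters = rng.standard_normal((4, 8, 8))
    print(encode_invariant(patch, filters))
```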

Sponsored by Toyota