Dissertation Defense

Sample-efficient learning and generalization with text representations

Lajanugen Logeswaran
WHERE:
Remote/Virtual

Abstract: Humans have a remarkable ability to learn without much supervision; often, a few examples or demonstrations are enough for us to learn a new concept. Machine learning models, on the other hand, typically need large amounts of labeled training data. In many practical situations such supervision is not available, and collecting labeled instances may be expensive or infeasible for privacy reasons. Hence, there is a practical need for approaches that can learn new tasks or adapt to new domains without significant supervision.

In this thesis, I address the limited-supervision problem from two perspectives. First, I propose methods that exploit large amounts of unlabeled data to learn useful feature representations in a self-supervised manner. Such representations capture rich prior knowledge about the data, making them useful across many tasks and enabling data-efficient learning of new tasks. In particular, I will present an approach for deriving such representations efficiently; computational efficiency is critical for exploiting unlabeled data, which is available in abundance.
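To make the self-supervised idea concrete, here is a minimal contrastive-learning sketch in the spirit of such objectives (an illustration, not the specific method of the thesis): an encoder is trained so that related text pairs, e.g. neighboring sentences, score higher than random pairs from the same batch. The function name, batch construction, and temperature value below are assumptions for the example.

    import torch
    import torch.nn.functional as F

    # Sketch: anchor_emb[i] and positive_emb[i] are embeddings of related
    # text (e.g., neighboring sentences); every other item in the batch
    # serves as a negative. No labels are required.
    def contrastive_loss(anchor_emb, positive_emb, temperature=0.07):
        a = F.normalize(anchor_emb, dim=-1)    # (batch, dim)
        p = F.normalize(positive_emb, dim=-1)  # (batch, dim)
        logits = a @ p.t() / temperature       # pairwise similarity matrix
        targets = torch.arange(a.size(0))      # i-th anchor matches i-th positive
        return F.cross_entropy(logits, targets)

Classifying the true pair against in-batch alternatives reduces training to an inexpensive softmax, which is one reason contrastive formulations scale well to abundant unlabeled corpora.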

Second, I explore models and algorithms with few- and zero-shot learning capabilities. My work demonstrates that ‘generalization via reading’ is an effective paradigm in which models use text descriptions to learn and generalize to new domains and tasks in a zero-shot manner. I will present models for zero-shot entity understanding and show that generalization to unseen entities in unseen domains is possible, assuming only that text descriptions of the entities are available. Beyond language understanding, I will also discuss how these ideas extend to other settings, such as embodied agents that learn from text instructions.
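As a rough sketch of how ‘generalization via reading’ can look for entity understanding (an illustration under assumptions, not the exact architecture defended here): a mention in context and each candidate entity's text description are embedded by text encoders, and candidates are ranked by similarity, so an unseen entity needs only a description rather than learned per-entity parameters. The encoder callables and names below are hypothetical.

    import torch

    # Sketch: score candidate entities for a mention purely from their text
    # descriptions. `encode_mention` and `encode_description` are assumed
    # text encoders (e.g., pretrained transformers) returning 1-D tensors.
    def rank_entities(encode_mention, encode_description,
                      mention_in_context, candidate_descriptions):
        m = encode_mention(mention_in_context)            # (dim,)
        d = torch.stack([encode_description(c)
                         for c in candidate_descriptions])  # (n, dim)
        scores = d @ m                                    # dot-product affinity
        return scores.argsort(descending=True)            # indices, best first

Because the entity side is just encoded text, the same scorer applies unchanged to entities and domains never seen during training.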

Organizer

Sonya Siddique

Faculty Host

Professor Honglak Lee