AI Seminar

A Defense of (Empirical) Neural Tangent Kernels

Danica Sutherland
Assistant Professor, University of British Columbia
WHERE:
3725 Beyster Building

Abstract

Neural tangent kernels, recently the hottest thing in deep learning theory, seem not to describe the full training process of practical neural networks very accurately. Even so, evidence is mounting that finite-width “empirical” neural tangent kernels can be highly effective for understanding real networks, especially “locally.” We discuss their use as a tool for understanding phenomena that can help guide the training process, and their practical use in approximate “look-ahead” criteria for active learning. We also present a new, theoretically justified approximation to the empirical NTK that can reduce computational cost by several orders of magnitude without substantially harming accuracy.
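For context on the objects in the abstract: the sketch below illustrates, under simple assumptions (a small scalar-output MLP in JAX), what a finite-width empirical NTK is and how an NTK-based “look-ahead” prediction might work via a linearized-network (kernel ridge regression) approximation. All names here (init_mlp, apply_mlp, empirical_ntk, ntk_look_ahead) are hypothetical illustrations, not the speaker’s implementation or the approximation presented in the talk.

```python
# A minimal sketch of the finite-width ("empirical") NTK and an NTK-based
# "look-ahead" prediction. Hypothetical illustration only, not the talk's code.
import jax
import jax.numpy as jnp
from jax.flatten_util import ravel_pytree

def init_mlp(key, sizes):
    """Initialize a small scalar-output MLP; returns a list of (W, b) pairs."""
    params = []
    for d_in, d_out in zip(sizes[:-1], sizes[1:]):
        key, sub = jax.random.split(key)
        params.append((jax.random.normal(sub, (d_in, d_out)) / jnp.sqrt(d_in),
                       jnp.zeros(d_out)))
    return params

def apply_mlp(params, x):
    """Forward pass for a single input x; returns a scalar output."""
    for W, b in params[:-1]:
        x = jnp.tanh(x @ W + b)
    W, b = params[-1]
    return (x @ W + b).squeeze()

def empirical_ntk(params, x1, x2):
    """One empirical-NTK entry at the current parameters:
    K(x1, x2) = <df/dtheta(x1), df/dtheta(x2)>."""
    g1, _ = ravel_pytree(jax.grad(apply_mlp)(params, x1))
    g2, _ = ravel_pytree(jax.grad(apply_mlp)(params, x2))
    return jnp.dot(g1, g2)

def ntk_matrix(params, X1, X2):
    """Full empirical-NTK matrix between two batches of inputs."""
    row = lambda a: jax.vmap(lambda b: empirical_ntk(params, a, b))(X2)
    return jax.vmap(row)(X1)

def ntk_look_ahead(params, X_train, y_train, X_test, ridge=1e-3):
    """A "look-ahead" heuristic: approximate what the network would predict
    on X_test after training on (X_train, y_train), by solving the linearized
    model with the current empirical NTK instead of actually retraining."""
    f_train = jax.vmap(lambda x: apply_mlp(params, x))(X_train)
    f_test = jax.vmap(lambda x: apply_mlp(params, x))(X_test)
    K_tt = ntk_matrix(params, X_train, X_train)
    K_st = ntk_matrix(params, X_test, X_train)
    alpha = jnp.linalg.solve(K_tt + ridge * jnp.eye(X_train.shape[0]),
                             y_train - f_train)
    return f_test + K_st @ alpha
</code>
```

Look-ahead active-learning criteria of this flavor score a candidate point by how such predictions would change if it were labeled and added to X_train; the cost of forming these kernel matrices grows quickly with network and dataset size, which is what the approximation mentioned in the abstract aims to reduce.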

Bio

Danica Sutherland is an Assistant Professor in computer science at the University of British Columbia and a Canada CIFAR AI Chair at Amii. She did her PhD at Carnegie Mellon University and a postdoc at University College London’s Gatsby Unit, and was a research assistant professor at TTI-Chicago. Her research focuses on representation learning (particularly forms that integrate kernel methods with deep learning), statistical learning theory, and understanding differences between probability distributions.

Zoom

https://umich.zoom.us/j/92216884113 (password: UMichAI)

Organizer

AI Lab

Faculty Host

Wei Hu