Faculty Candidate Seminar

Abstraction in Reinforcement Learning

David Abel
Ph.D. Candidate
Brown University
WHERE:
3725 Beyster Building
Reinforcement Learning (RL) agents must learn to make useful decisions through action and observation alone. To be effective problem solvers, RL agents must efficiently explore vast worlds, assign credit from delayed feedback, and generalize to new experiences, all while making use of limited data, computational resources, and perceptual bandwidth.

In this talk, I discuss the role that abstraction can play in overcoming these fundamental challenges of RL. I first introduce classes of state abstraction that induce a trade-off between optimality and the size of an agent's resulting abstract model, yielding a practical algorithm for learning useful and compact representations from an expert. I then show how these learned, simple representations can underlie efficient learning in complex environments. Second, I analyze the problem of searching for abstract actions that make planning more efficient. I present new computational complexity results proving it is NP-hard to find the set of abstract actions that minimizes planning time, but show that this set can be approximated in polynomial time. Collectively, these results establish a principled foundation for discovering abstractions that minimize the difficulty of high-quality learning and decision making.
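To make the optimality/compactness trade-off concrete, the following is a minimal sketch (not code from the talk): it groups ground states whose optimal Q-values agree within a tolerance ε for every action, so that a larger ε yields a smaller abstract state space at the cost of a larger potential value loss. The function name, state labels, and toy Q-values are all illustrative assumptions.

```python
def epsilon_aggregate(states, actions, q, epsilon):
    """Greedily cluster states whose Q-values agree within epsilon
    for every action. Illustrative sketch only: larger epsilon means
    fewer abstract states but looser fidelity to the ground problem."""
    clusters = []         # each cluster is a list of ground states
    representatives = []  # one representative ground state per cluster
    for s in states:
        placed = False
        for rep, cluster in zip(representatives, clusters):
            if all(abs(q[s][a] - q[rep][a]) <= epsilon for a in actions):
                cluster.append(s)
                placed = True
                break
        if not placed:
            representatives.append(s)
            clusters.append([s])
    return clusters

# Toy example: s0/s1 and s2/s3 have near-identical Q-values, so
# epsilon = 0.1 collapses four ground states into two abstract states.
states = ["s0", "s1", "s2", "s3"]
actions = ["left", "right"]
q = {
    "s0": {"left": 1.00, "right": 0.50},
    "s1": {"left": 1.05, "right": 0.52},
    "s2": {"left": 0.10, "right": 0.90},
    "s3": {"left": 0.12, "right": 0.88},
}
print(epsilon_aggregate(states, actions, q, epsilon=0.1))
# [['s0', 's1'], ['s2', 's3']]
```

With ε = 0 the aggregation is exact and nothing is merged here; as ε grows, more states collapse together and the abstract model shrinks, which is the trade-off the abstract describes.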

Bio: David Abel is a Ph.D. candidate at Brown University advised by Michael Littman and a former research intern at DeepMind London, the University of Oxford, and Microsoft Research in New York City. His research focuses broadly on the theory of reinforcement learning, with occasional ventures into the philosophy of AI and computational sustainability.

Organizer

Cindy Estell

Faculty Host

Satinder Baveja