AI Seminar
Task Planning with Large Language Models
Meeting ID: 979 4349 4628
Passcode: aiseminar
Abstract:
Task planning is an important capability for artificial agents to efficiently explore their environments and complete tasks. In this talk, I will discuss how Large Language Models (LLMs) can be harnessed to generate accurate task plans. In the first part of the talk, I will show that LLMs possess strong priors for planning. Because LLMs are designed to process discrete tokens, grounding such plans in visual environments poses a challenge. I will present a simple mechanism for directly integrating visual observations into LLMs for grounded planning, which outperforms prior approaches that incorporate observations indirectly via captions and affordances. In the second part of the talk, I will discuss how LLM planners can be further enhanced with knowledge of causal dependencies between intermediate steps. I will present an unsupervised method for generating task graphs that expose the dependency structure among steps. I will further discuss how program representations can be leveraged to reason about preconditions in a principled manner, and demonstrate that knowledge of preconditions helps build better LLM agents. I will conclude with a discussion of challenges and recent trends in task planning with LLMs.
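To make the task-graph and precondition ideas above concrete, here is a minimal Python sketch, not the speaker's actual method: the cooking steps, state variables, and helper functions are all hypothetical. It represents a task graph as a dependency map over steps and expresses preconditions as program-style predicates over a symbolic state, so an ordering that violates a dependency can be detected directly.

# Illustrative sketch only; all step names and helpers are hypothetical.
from graphlib import TopologicalSorter

# Task graph: each step maps to the set of steps it depends on.
TASK_GRAPH = {
    "boil water": set(),
    "add pasta": {"boil water"},
    "drain pasta": {"add pasta"},
    "serve": {"drain pasta"},
}

# Program-style preconditions: a predicate over the symbolic world state
# guards each step, making causal dependencies explicit and checkable.
PRECONDITIONS = {
    "add pasta": lambda s: s["water_boiling"],
    "drain pasta": lambda s: s["pasta_cooked"],
}

def valid_plan(graph):
    """Return the steps in an order that respects all dependencies."""
    return list(TopologicalSorter(graph).static_order())

def check_plan(plan, state):
    """Verify each step's precondition against the evolving state."""
    for step in plan:
        pre = PRECONDITIONS.get(step, lambda s: True)
        if not pre(state):
            return False, step
        # Toy state updates standing in for the real effects of each step.
        if step == "boil water":
            state["water_boiling"] = True
        elif step == "add pasta":
            state["pasta_cooked"] = True
    return True, None

plan = valid_plan(TASK_GRAPH)
ok, failed = check_plan(plan, {"water_boiling": False, "pasta_cooked": False})
print(plan, ok)

In this toy setting, a plan proposed by an LLM could be validated the same way: run each step's predicate against the current state and reject or repair the plan at the first failing step.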
About the speaker:
Lajanugen is a Research Scientist at LG AI Research. His research interests lie in Machine Learning and Natural Language Processing. He is particularly interested in learning from limited supervision, with applications to task planning, reasoning and language grounding. He has published papers in top venues such as ICML, NeurIPS, ICLR, AAAI, NAACL and ACL. His work on zero-shot entity linking received a best paper nomination at ACL 2019. He received his PhD from the University of Michigan. During his PhD, he spent time at Google Brain, Google AI and Facebook AI as a research intern.