Faculty Candidate Seminar
Towards Generalizable Imitation in Robotics
Robotics and AI are growing rapidly, fueled by innovations in data-driven learning paradigms coupled with novel device design, in applications such as healthcare, manufacturing, and service robotics. In the quest for general-purpose autonomy, we need abstractions and algorithms for efficient generalization.
Data-driven methods such as reinforcement learning circumvent hand-tuned feature engineering, but they lack guarantees and often incur massive computational expense: training these models frequently takes weeks, on top of months of task-specific data collection on physical systems. Furthermore, such ab initio methods often do not scale to complex sequential tasks. In contrast, biological agents can often learn faster, not only through self-supervision but also through imitation. My research aims to bridge this gap and enable generalizable imitation for robot autonomy. We need to build systems that capture semantic task structure to promote sample efficiency and generalize to new task instances across visual, dynamical, and semantic variations. This involves designing algorithms that unify reinforcement learning, control-theoretic planning, semantic scene and video understanding, and design.
In this talk, I will discuss two aspects of generalizable imitation: task imitation, and generalization in both visual and kinematic spaces. First, I will describe how unsupervised structure learning lets us move away from hand-designed finite state machines for complex multi-step sequential tasks. Then I will discuss techniques for robust policy learning that generalizes across unseen dynamics, and revisit structure learning for task-level understanding that generalizes across visual semantics.
Lastly, I will present a program-synthesis-based method for generalization across task semantics from a single example, even with unseen task structure, topology, or length. The algorithms and techniques introduced are applicable across domains in robotics; in this talk, I will illustrate these ideas through my work on medical and personal robotics.
Animesh is a Postdoctoral Researcher at the Stanford AI Lab. He is interested in problems at the intersection of optimization, machine learning, and design. He studies the interaction of data-driven learning for autonomy and design for automation, with applications to human skill augmentation and decision support. Animesh received his Ph.D. from the University of California, Berkeley, where he was a part of the Berkeley AI Research center and the Automation Science Lab. His research has been recognized with the Best Applications Paper Award at IEEE CASE, Best Video at the Hamlyn Symposium on Surgical Robotics, and a Best Paper Nomination at IEEE ICRA 2015. His work has also been featured in press outlets such as the New York Times, UC Health, UC CITRIS News, and BBC Click.