Faculty Candidate Seminar
Perception for Robust, Autonomous Robotic Manipulation – A Question of Balancing Prior Knowledge and Learning From Data
Given a stream of raw, multi-modal sensory input data, an autonomous robot has to continuously decide how to act in order to achieve a specific task. This requires the robot to map a very high-dimensional space (sensory data) to another high-dimensional space (motor commands). Finding this non-linear relationship only becomes tractable if we introduce suitable biases and task-specific prior knowledge that structure this mapping. These biases have to provide enough flexibility to cope with the expected variability in the robot's task. However, increased model flexibility comes at a price: more open parameters, which must either be manually tuned or learned from a sufficient amount of data.
In this talk, I illustrate this trade-off by analyzing two problems in the area of perception for autonomous robotic grasping and manipulation: (i) learning to grasp objects given only partial and noisy sensory data and (ii) visual object tracking. I present different approaches to each of these problems, which lie at different points along the spectrum between the amount of prior knowledge incorporated in the model and the number of open parameters learned from data. Based on these examples, I conclude my talk by discussing different ways to include biases and prior knowledge in a model and how to choose a suitable, task-specific trade-off with respect to the number of remaining open parameters.
Jeannette Bohg is a senior research scientist at the Autonomous Motion Department, Max Planck Institute for Intelligent Systems in Tübingen, Germany. She holds a Diploma in Computer Science from the Technical University Dresden, Germany and a Master's degree in Applied Information Technology from Chalmers in Gothenburg, Sweden. In 2012, she received her PhD from the Royal Institute of Technology (KTH) in Stockholm, Sweden.
Her research interest lies at the intersection of robotic manipulation, computer vision, and machine learning. Specifically, she analyzes how continuous, multi-modal sensory feedback can be incorporated by a robot to achieve robust and dexterous manipulation capabilities in the presence of uncertainty and a dynamically changing environment.