
Faculty Candidate Seminar

Learning to Walk by Actuating a Passive Dynamic Walker

Dr. Russell Tedrake

Dr. Tedrake is from MIT.
Bipedal robots today can walk across a flat factory floor, and even up
stairs, but they cannot compete with humans in terms of speed,
efficiency, or robustness. There is, however, a class of walking robots
known as "passive dynamic walkers" which are capable of stable walking
down a small decline without using any motors. The primary challenge in
the design of walking controllers is solving the "delayed-reward
problem" -- e.g., understanding the actions required at toe-off to
achieve the desired motion through the swing phase. Using an optimal
control / reinforcement learning framework, we can improve the
performance of these walking systems at design time, and build learning
systems that allow the controller to adapt and improve as the robot
walks. The learning works quickly enough that we can acquire a robust
controller from a "blank slate": the robot begins walking within a
minute, and learning converges in approximately 20 minutes. After
acquiring the basic controller, the learning system is able to quickly
adapt to small changes in the environment and adjust to the terrain as
it walks.
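The "delayed-reward problem" mentioned above can be illustrated with a toy policy-gradient (REINFORCE-style) sketch: reward arrives only at the end of an episode, yet each earlier action must receive credit for it. This is an illustrative assumption of one standard way to handle delayed reward, not the actual controller or learning rule from the talk; the task, parameters, and update below are all hypothetical.

```python
import math
import random

# Toy delayed-reward setup: the agent makes T binary decisions (think of
# them as per-step control choices), but the reward is paid only at the
# end of the episode. REINFORCE assigns credit by weighting each
# action's log-probability gradient by the episode's total return.

T = 5            # decisions per episode (hypothetical)
ALPHA = 0.5      # learning rate (hypothetical)
EPISODES = 2000

# One Bernoulli policy parameter per step; pi(a=1) = sigmoid(theta).
theta = [0.0] * T

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def run_episode():
    actions = [1 if random.random() < sigmoid(th) else 0 for th in theta]
    # Delayed reward: only at episode end, one point per "good" action.
    return actions, float(sum(actions))

random.seed(0)
for _ in range(EPISODES):
    actions, G = run_episode()
    for t in range(T):
        p = sigmoid(theta[t])
        # d/d(theta_t) of log pi(a_t) for a Bernoulli policy is (a_t - p)
        theta[t] += ALPHA * G * (actions[t] - p)

# After training, the policy should strongly prefer action 1 at every
# step, even though no individual step was ever rewarded directly.
probs = [round(sigmoid(th), 2) for th in theta]
print(probs)
```

Because the return is shared across the whole episode, each step's update direction is driven by the covariance between that step's action and the final reward, which is how credit propagates backward without any per-step reward signal.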

Sponsored by

CSE