AI Seminar

Mind and Data: Learning to Predict Long-term Consequences Efficiently

Rich Sutton, Professor, University of Alberta

For some time now I have been exploring the idea that Artificial Intelligence can be viewed as a Big Data problem in the sense that it involves continually processing large amounts of sensorimotor data in real time, and that what is learned from the data is usefully characterized as predictions about future data. This perspective is appealing because it reduces the abstract ideas of knowledge and truth to the clearer ideas of prediction and predictive accuracy, and because it enables learning from data without human intervention. Nevertheless, it is a radical idea, and it is not immediately clear how to make progress in pursuing it.

A good example of simple predictive knowledge is that people and other animals continually make and learn many predictions about their sensory input stream, a phenomenon called "nexting" or "Pavlovian conditioning" by psychologists. In my laboratory we have recently built a robot capable of nexting: every tenth of a second it makes and learns 6000 long-term predictions about its sensors, each a function of 6000 sensory features. Doing this is computationally challenging and taxes the abilities of modern laptop computers. I argue that it also strongly constrains the learning algorithms: linear computational complexity is critical for scaling to large numbers of features, and temporal-difference learning is critical to handling long-term predictions efficiently. This, then, is the strategy we pursue for making progress on the Big Data view of AI: we focus on the search for the few special algorithms that can meet the demanding computational constraints of learning long-term predictions efficiently.
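To make the computational constraint concrete, the sketch below shows a generic linear TD(lambda) update of the kind the abstract alludes to: one weight vector per prediction, updated in time linear in the number of features. The function and parameter names are illustrative assumptions, not code from the talk or the robot described in it.

```python
import numpy as np

def td_lambda_step(w, e, x, x_next, signal, gamma, lam, alpha):
    """One linear TD(lambda) update for a single long-term prediction (illustrative sketch).

    w      : weight vector, one entry per sensory feature
    e      : eligibility trace vector, same shape as w
    x      : feature vector at time t
    x_next : feature vector at time t+1
    signal : the sensor value being predicted
    gamma  : discount factor setting the prediction's time scale
    lam    : trace-decay parameter
    alpha  : step size
    """
    # TD error: discrepancy between the current prediction and the
    # new signal plus the discounted next prediction
    delta = signal + gamma * np.dot(w, x_next) - np.dot(w, x)
    # Accumulating eligibility trace
    e = gamma * lam * e + x
    # Weight update -- O(number of features) per prediction per time step
    w = w + alpha * delta * e
    return w, e
```

Under these assumptions, maintaining 6000 such predictions over 6000 features costs on the order of tens of millions of arithmetic operations every tenth of a second, which is roughly why the abstract describes the workload as taxing a modern laptop and why linear-complexity algorithms matter.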
Richard Sutton is a professor and iCORE chair in the department of computing science at the University of Alberta. He is a fellow of the Association for the Advancement of Artificial Intelligence and co-author of the textbook Reinforcement Learning: An Introduction from MIT Press. Before joining the University of Alberta in 2003, he worked in industry at AT&T and GTE Labs, and in academia at the University of Massachusetts. He received a PhD in computer science from the University of Massachusetts in 1984 and a BA in psychology from Stanford University in 1978. Rich's research interests center on the learning problems facing a decision-maker interacting with its environment, which he sees as central to artificial intelligence. He is also interested in animal learning psychology, in connectionist networks, and generally in systems that continually improve their representations and models of the world.

Sponsored by

Toyota