Faculty Candidate Seminar
Understanding Daily Activities in Egocentric Video
Recent advances in camera technology have made it possible to build a comfortable, wearable system that can capture the scene in front of users as they go about their daily lives. Products based on this technology, such as GoPro and Google Glass, have generated substantial interest. In this talk, I will present my work on egocentric vision, which leverages wearable camera technology to provide a new line of attack on classical computer vision problems such as object categorization and activity recognition. I will demonstrate that contextual cues and the actions of a user can be exploited in an egocentric vision system to learn models of objects under very weak supervision. In addition, I will show that measurements of a subject's gaze during object manipulation tasks can provide novel feature representations to support activity recognition. Moving beyond surface-level categorization, I will showcase a method for automatically discovering object state changes during actions, and an approach to building descriptive models of social interactions between groups of individuals. These new capabilities for egocentric video analysis will enable new applications in life logging, elder care, human-robot interaction, developmental screening, augmented reality, and social media.
Alireza Fathi is a Ph.D. candidate in the College of Computing at the Georgia Institute of Technology, working with James M. Rehg. He received his Bachelor's degree from Sharif University of Technology in Iran in 2006 and his M.Sc. degree from Simon Fraser University in Canada in 2008. His main research areas are computer vision and machine learning, with a particular interest in egocentric (first-person) vision. He has published several papers at top vision conferences, including CVPR, ICCV, and ECCV, on recognizing objects and activities in first-person video. He was a co-organizer of the 2nd IEEE Workshop on Egocentric Vision, held in conjunction with CVPR 2012.