On Networks and Behavior: Strategic Inference and Machine Learning
Studying complex behavior in economic, social, and similar systems is an important scientific endeavor, with potentially direct impact on society via the eventual commercialization of relevant technology. The big-data revolution offers the opportunity to easily collect and process large amounts of data recording system behavior. Yet our fundamental understanding of real-world complex systems remains slim at best.
In this talk, I will summarize research from my group that takes a modern AI, machine learning, and engineering approach to questions about systems in domains where global behavior results from complex local interactions of agents embedded in a network. Our particular interest is interactions resulting from the distributed reasoning and the deliberate decisions of a large number of agents (e.g., a social network).
I will present a general approach, which we like to call "causal strategic inference" (CSI), that we put forward to study complex strategic systems based on a noncooperative graphical game-theoretic model. I will focus on our work on questions of influence in social networks, and argue for a new definition of what it means to be "influential."
The first part of the talk will assume full knowledge of the game-theoretic model. But, how could we obtain such models? Given their size and complexity, knowledge engineering is impractical. Inspired by the CSI work, in the second part of the talk, I will present our proposed framework for learning games from strictly behavioral data. I will show how our framework reveals strong connections to fundamental concepts in machine learning such as the tradeoff between goodness-of-fit and model complexity, as well as the need for bias in the learning process.
In our work, we seek, and provide, algorithms that scale polynomially with the number of agents and can thus handle relatively large systems (i.e., at least 100 agents). While I will present hardness results for some of our problems of interest, I will also provide evidence that we achieve our computational-efficiency objectives, in the form of theoretical results: mathematical characterizations, positive computational results, provable learning guarantees, and tractable algorithms for learning games. I will illustrate our inference and learning approach on synthetic examples and two real-world domains: the U.S. Supreme Court and the U.S. Congress.
This is joint work with Mohammad Irfan (Bowdoin) and Jean Honorio (MIT).
Luis E. Ortiz is an assistant professor at Stony Brook University. Prior to joining Stony Brook, he was an assistant professor at the University of Puerto Rico, Mayagüez; a postdoctoral lecturer at MIT; a postdoctoral researcher at the University of Pennsylvania; and a consultant in the field of AI and ML at AT&T Laboratories-Research. He received Sc.M. and Ph.D. degrees in computer science from Brown University in 1998 and 2001, respectively, and a B.S. degree in computer science from the University of Minnesota in 1995. His main research areas are AI and ML. His current focus is computational game theory and economics, with applications to the study of influence in strategic, networked, large-population settings, and to learning game-theoretic models from data on strategic behavior. Other interests include game-theoretic models for interdependent security, algorithms for computing equilibria in games, connections to probabilistic graphical models, and AdaBoost. Prof. Ortiz received the NSF CAREER award in 2011. He was a National Physical Science Consortium (NPSC) Ph.D. Fellow and an NSF Minority Graduate Fellow.