Efficiently Learning Markov Random Fields from Dynamics
Abstract: An important task in high-dimensional statistics is learning the parameters or dependency structure of undirected graphical models, also known as Markov random fields (MRFs), which have applications across computer science, machine learning, and statistics. Prior work on this problem assumes access to i.i.d. samples from the MRF distribution, and state-of-the-art algorithms run in n^{\Theta(k)} time, where n is the dimension and k is the order of the interactions. However, well-known computational lower bounds imply that, given i.i.d. samples from a sparse, order-k MRF, any learning algorithm likely requires n^{\Omega(k)} time. Moreover, in many economic, network, or statistical physics applications, even obtaining i.i.d. samples from the MRF distribution is unrealistic or itself computationally hard.
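For concreteness, one standard way to write an order-k MRF over binary variables is sketched below; the exact parameterization used in the talk may differ, and the notation (coefficients \theta_S) is illustrative rather than the speaker's.

```latex
% One common parameterization of an order-k Markov random field on x \in \{-1,+1\}^n:
% the log-density is a multilinear polynomial of degree at most k.
\[
  p(x) \;\propto\; \exp\!\Bigg( \sum_{\substack{S \subseteq [n] \\ 1 \le |S| \le k}} \theta_S \prod_{i \in S} x_i \Bigg)
\]
% Learning the dependency structure means recovering the sets S with \theta_S \neq 0;
% "sparse" / "bounded-degree" means each variable belongs to only O(1) such sets.
```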
In this talk, we show that these fundamental barriers for learning MRFs can, surprisingly, be completely circumvented when learning from natural, dynamical observations. We prove that in bounded-degree MRFs, the dependency structure and parameters can be recovered using a trajectory of Glauber dynamics of length O(n \log n), with runtime O(n^2 \log n). The implicit constants depend only on the degree and non-degeneracy parameters of the model, not on the dimension n. Our results further evince the possibility of leveraging correlations across data to obtain provably better learning algorithms than are attainable from i.i.d. samples in important statistical settings.
Based on joint works with Ankur Moitra and Elchanan Mossel.
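To make the observation model concrete, here is a minimal Python sketch of Glauber dynamics on an Ising model (the simplest pairwise MRF): at each step, a uniformly random coordinate is resampled from its conditional distribution given the rest. This only illustrates the kind of trajectory the result learns from; the function name glauber_trajectory and the random sparse model in the example are assumptions for illustration, not code from the paper, and the learning algorithm itself (which recovers structure and parameters from such a trajectory) is not shown.

```python
import numpy as np

def glauber_trajectory(J, h, num_steps, seed=None):
    """Simulate Glauber dynamics for an Ising model, a pairwise (order-2) MRF.

    J -- (n, n) symmetric interaction matrix with zero diagonal
    h -- (n,) vector of external fields
    Returns the list of visited states, each a vector in {-1, +1}^n.
    """
    rng = np.random.default_rng(seed)
    n = J.shape[0]
    x = rng.choice([-1, 1], size=n)              # arbitrary initial state
    trajectory = [x.copy()]
    for _ in range(num_steps):
        i = rng.integers(n)                      # resample a uniformly random site
        # Conditional law of x_i given the other coordinates:
        # P(x_i = +1 | x_{-i}) = sigmoid(2 * (sum_j J_ij x_j + h_i))
        local_field = J[i] @ x + h[i]
        p_plus = 1.0 / (1.0 + np.exp(-2.0 * local_field))
        x[i] = 1 if rng.random() < p_plus else -1
        trajectory.append(x.copy())
    return trajectory

# Example: a trajectory of length ~ n log n on a random sparse Ising model.
n = 50
rng = np.random.default_rng(0)
J = np.triu(rng.normal(scale=0.2, size=(n, n)) * (rng.random((n, n)) < 3 / n), k=1)
J = J + J.T                                      # symmetric, zero diagonal
h = rng.normal(scale=0.1, size=n)
states = glauber_trajectory(J, h, num_steps=int(n * np.log(n)))
```

Per the abstract, a single trajectory of this form of length O(n \log n) suffices to recover the dependency structure and parameters (here, J and h) in O(n^2 \log n) time, with constants independent of n.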
Bio: Jason is a Postdoctoral Associate in the Department of Mathematics at MIT, where he works with Elchanan Mossel. Previously, he received his PhD from the Department of Computer Science at Cornell University, where he was advised by Éva Tardos. He graduated from Yale University with degrees in Mathematics (B.S., with distinction) and Economics (B.A.).
He is broadly interested in theoretical computer science. In particular, his research focuses on theoretical problems at the intersection of algorithms, learning, game theory, networks, and randomness.