Pathwise Conditioning and Non-Euclidean Gaussian Processes (via Zoom)
Abstract: In Gaussian processes, conditioning and computation of posterior distributions are usually done distributionally, by working with finite-dimensional marginals. However, there is another way to think about conditioning: using actual random functions rather than their probability distributions. This perspective is particularly helpful in decision-theoretic settings such as Bayesian optimization, where it enables efficient computation of a wider class of acquisition functions than otherwise possible. In this talk, we describe these recent advances and discuss their broader implications for Gaussian processes. We then present a class of Gaussian process models on graphs and manifolds, which enables one to perform Bayesian optimization while taking symmetries and constraints into account in an intrinsic manner.
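To make the pathwise view concrete, the sketch below implements its core identity, Matheron's rule: a posterior sample is obtained by drawing a prior function and adding a data-dependent correction, f_post(x) = f_prior(x) + k(x, X)(K_XX + σ²I)⁻¹(y − f_prior(X) − ε) with ε ~ N(0, σ²I). This is a minimal NumPy illustration, not code from the talk; the RBF kernel, training data, and noise level are hypothetical choices made for the example.

```python
import numpy as np

def rbf_kernel(a, b, lengthscale=1.0):
    # Squared-exponential kernel k(a, b) = exp(-(a - b)^2 / (2 l^2)) for 1-D inputs.
    d2 = (a[:, None] - b[None, :]) ** 2
    return np.exp(-d2 / (2 * lengthscale**2))

rng = np.random.default_rng(0)

# Hypothetical 1-D training data with Gaussian observation noise (illustration only).
X = np.array([-2.0, -0.5, 1.0, 2.5])
y = np.sin(X) + 0.1 * rng.standard_normal(X.shape)
noise_var = 0.1**2

Xs = np.linspace(-4.0, 4.0, 200)   # test locations
Z = np.concatenate([X, Xs])        # joint grid for the prior draw

# Step 1: draw one prior function, evaluated jointly at train and test points.
K = rbf_kernel(Z, Z) + 1e-6 * np.eye(len(Z))   # jitter for numerical stability
f_prior = np.linalg.cholesky(K) @ rng.standard_normal(len(Z))
f_prior_X, f_prior_Xs = f_prior[: len(X)], f_prior[len(X):]

# Step 2: Matheron's update turns the prior sample into a posterior sample:
# f_post(x) = f_prior(x) + k(x, X) (K_XX + s^2 I)^{-1} (y - f_prior(X) - eps).
eps = np.sqrt(noise_var) * rng.standard_normal(len(X))
Kxx = rbf_kernel(X, X) + noise_var * np.eye(len(X))
v = np.linalg.solve(Kxx, y - f_prior_X - eps)
f_post_Xs = f_prior_Xs + rbf_kernel(Xs, X) @ v   # one posterior function sample at Xs
```

Because the correction acts on the sampled function itself, each posterior draw is an actual function: in Bayesian optimization one can, for instance, minimize each sampled path to compute acquisition strategies such as Thompson sampling without discretizing the posterior distribution.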
Bio: Alexander Terenin is a Postdoctoral Research Associate at the University of Cambridge. He is interested in statistical machine learning, particularly in settings where the data is not fixed, but is gathered interactively by the learning machine. This leads naturally to Gaussian processes and data-efficient interactive decision-making systems such as Bayesian optimization, to areas such as multi-armed bandits and reinforcement learning, and to techniques for incorporating inductive biases and prior information such as symmetries into machine learning models.