Better Estimates of Prediction Uncertainty
Abstract: How can we quantify the accuracy and uncertainty of the predictions we make in online decision problems? Standard approaches, such as asking for calibrated predictions or producing prediction intervals via conformal methods, give marginal guarantees: promises that hold only on average over the history of data points. Guarantees like this are unsatisfying when the data points correspond to people and the predictions are used in important contexts, such as personalized medicine.
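To make the notion of a marginal guarantee concrete, here is a minimal sketch of split conformal prediction intervals in Python. This is a standard textbook construction, not the method presented in the talk; the function name, the 1 - alpha target, and the synthetic calibration data are illustrative assumptions. Its coverage promise holds only on average over exchangeable data points, with no guarantee for any particular subgroup.

```python
import numpy as np

def split_conformal_interval(cal_preds, cal_labels, test_pred, alpha=0.1):
    """Split conformal interval: covers the true label with probability
    >= 1 - alpha, but only on average (marginally) over exchangeable points."""
    scores = np.abs(cal_labels - cal_preds)               # nonconformity scores
    n = len(scores)
    q_level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)  # finite-sample correction
    q = np.quantile(scores, q_level, method="higher")
    return test_pred - q, test_pred + q

# Illustrative usage on synthetic calibration data.
rng = np.random.default_rng(0)
cal_preds = rng.normal(size=500)
cal_labels = cal_preds + rng.normal(scale=0.5, size=500)
lo, hi = split_conformal_interval(cal_preds, cal_labels, test_pred=0.3)
```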
In this work, we study how to give stronger-than-marginal ("multivalid") guarantees for estimates of means, moments, and prediction intervals. Such guarantees are valid not just on average over the entire population, but also on average over an enormous number of potentially intersecting demographic groups. We leverage techniques from game theory to give efficient algorithms that achieve these guarantees even in adversarial environments.
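As a small illustration of why group-wise validity is a strictly stronger ask, the sketch below measures empirical coverage separately over potentially intersecting groups: an interval can reach 90% coverage overall while under-covering a particular intersection. The group names and data layout are hypothetical, and this only audits coverage; it does not implement the algorithms discussed in the talk.

```python
import numpy as np

def coverage_by_group(lo, hi, labels, group_masks):
    """Empirical coverage overall and within each (possibly overlapping) group."""
    covered = (labels >= lo) & (labels <= hi)
    report = {"overall": covered.mean()}
    for name, mask in group_masks.items():
        if mask.any():
            report[name] = covered[mask].mean()
    return report

# Hypothetical data: noisy point predictions with a fixed-width interval,
# audited over two random groups and their intersection.
n = 1000
rng = np.random.default_rng(1)
labels = rng.normal(size=n)
preds = labels + rng.normal(scale=0.5, size=n)
lo, hi = preds - 0.8, preds + 0.8
groups = {
    "group_a": rng.random(n) < 0.5,
    "group_b": rng.random(n) < 0.3,
}
groups["a_and_b"] = groups["group_a"] & groups["group_b"]
print(coverage_by_group(lo, hi, labels, groups))
```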
Bio: Aaron is a Professor in the Computer and Information Science department at the University of Pennsylvania, where he is associated with the theory group, PRiML (Penn Research in Machine Learning), and the Warren Center for Network and Data Sciences, and is co-director of the program in Networked and Social Systems Engineering. He is also affiliated with the AMCS program (Applied Mathematics and Computational Science). Aaron spent a year as a postdoc at Microsoft Research New England. Before that, he received his PhD from Carnegie Mellon University, where he was fortunate to have been advised by Avrim Blum. His main interests are in algorithms and machine learning, specifically in the areas of private data analysis, fairness in machine learning, game theory and mechanism design, and learning theory. Aaron is the recipient of a Presidential Early Career Award for Scientists and Engineers (PECASE), an Alfred P. Sloan Research Fellowship, an NSF CAREER award, a Google Faculty Research Award, an Amazon Research Award, and a Yahoo Academic Career Enhancement award. He is also an Amazon Scholar at Amazon Web Services (AWS). Previously, Aaron was involved in advisory and consulting work related to differential privacy, algorithmic fairness, and machine learning, including with Apple and Facebook. Aaron was also a scientific advisor for Leapyear and Spectrum Labs.