An Equivalence Between Private Classification and Online Prediction (via Zoom)
Abstract: Differential privacy enables rich statistical analyses on data while provably protecting individual-level privacy. The last 15 years of research has shown that, at least in principle, a number of fundamental statistical tasks are compatible with differential privacy. However, privacy-preserving analyses often require additional complexity over their non-private counterparts, for instance, in terms of the number of data samples one needs to collect in order to get accurate results. In fact, some infinite concept classes that are "easy" to learn in standard computational learning theory become impossible to learn under differential privacy using any finite number of samples.
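For readers who want the formal statement, here is the standard definition of differential privacy (background material, not part of the abstract itself): a randomized algorithm $M$ is $(\varepsilon, \delta)$-differentially private if, for every pair of datasets $S$ and $S'$ differing in a single individual's record and every set of outcomes $T$,

$$\Pr[M(S) \in T] \;\le\; e^{\varepsilon} \cdot \Pr[M(S') \in T] + \delta.$$

Smaller $\varepsilon$ and $\delta$ mean stronger privacy, and the "additional complexity" mentioned above typically appears as a larger number of samples needed to reach a given accuracy at a fixed $(\varepsilon, \delta)$.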
Alon, Livni, Malliaris, and Moran recently showed that finite Littlestone dimension is a necessary condition for a concept class to be privately learnable, and conjectured that it is sufficient as well. Here, Littlestone dimension characterizes learnability in the mistake-bound model of online learning. We prove their conjecture, establishing a qualitative equivalence between private learnability and online learnability. Our result goes by way of a new connection between privacy and algorithmic stability for learning.
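As background (standard definitions from online learning, included here for convenience): in the mistake-bound model, the learner sequentially predicts the label of each adversarially chosen example and then observes the true label. The Littlestone dimension $\mathrm{Ldim}(\mathcal{H})$ of a concept class $\mathcal{H}$ is the depth of the deepest complete binary mistake tree shattered by $\mathcal{H}$, and it equals the optimal worst-case mistake bound achievable by any learner on realizable sequences. In this notation, the equivalence reads:

$$\mathcal{H} \text{ is privately learnable} \iff \mathrm{Ldim}(\mathcal{H}) < \infty.$$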
Joint work with Roi Livni and Shay Moran.
Bio: Mark Bun focuses on theoretical computer science, including data privacy, computational complexity, cryptography, and the foundations of machine learning. He uses polynomials (continuous functions) to investigate fundamental properties of Boolean (discrete) functions and has also developed new algorithms and lower bound techniques for differentially private data analysis. He spent the 2018–2019 academic year at the Simons Institute for the Theory of Computing at UC Berkeley and joined Boston University as a tenure-track assistant professor in July 2019.