Strategic Ranking (via Zoom)
Abstract: Many consequential decisions (e.g., college admissions) are based on relative, not absolute, measures of quality. While the literature on strategic classification has studied the design of classifiers robust to manipulation by strategic individuals, existing work has yet to consider the effect of competition among strategic individuals as induced by the algorithm design. Motivated by constrained allocation settings such as college admissions, we introduce strategic ranking, in which the (designed) individual reward depends on an applicant's post-effort rank in a measurement of interest. Our results illustrate how competition among applicants affects the resulting equilibria and model insights. We analyze how various ranking reward designs, belonging to a family of step functions, trade off applicant, school, and societal utility, as well as how ranking design counters inequities arising from disparate access to resources. In particular, we find that randomization in the reward design can mitigate two measures of disparate impact: welfare gap and access. This talk is based on joint work with Nikhil Garg and Christian Borgs.
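To make the step-function reward family concrete, here is a minimal sketch (not code from the talk) of rank-based rewards: applicants are ranked by a post-effort score, and reward depends only on rank quantile. The cutoffs, reward levels, and randomized variant below are illustrative assumptions, not the designs analyzed in the paper.

```python
import numpy as np

def step_reward(rank_quantile, cutoffs=(0.1, 0.5), levels=(1.0, 0.4, 0.0)):
    """Reward as a step function of rank quantile (hypothetical values):
    top 10% receive 1.0, the next 40% receive 0.4, the rest receive 0.0."""
    for cutoff, level in zip(cutoffs, levels):
        if rank_quantile <= cutoff:
            return level
    return levels[-1]

rng = np.random.default_rng(0)
scores = rng.normal(size=1000)              # post-effort measurements
ranks = scores.argsort()[::-1].argsort()    # 0 = highest score
quantiles = (ranks + 1) / len(scores)       # rank quantile in (0, 1]

rewards = np.array([step_reward(q) for q in quantiles])

# One illustrative form of randomization: smooth the deterministic step
# by awarding the top-level reward to mid-ranked applicants with some
# probability, softening the sharp cutoff at the 10% threshold.
randomized = rewards.copy()
mid = (quantiles > 0.1) & (quantiles <= 0.5)
randomized[mid] = np.where(rng.random(mid.sum()) < 0.25, 1.0, 0.4)

print(rewards.mean(), randomized.mean())
```

Under this toy setup, a sharper step concentrates reward (and thus incentive to exert effort) near the cutoff, while the randomized variant spreads expected reward across a wider band of ranks; the abstract's claim is that such randomization can narrow welfare gaps between groups with unequal access to resources.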
Bio: Lydia T. Liu is a postdoctoral researcher at Cornell University Computer Science, working with Jon Kleinberg, Karen Levy, and Solon Barocas in the Artificial Intelligence, Policy, and Practice (AIPP) initiative. Her research examines the theoretical foundations of machine learning and algorithmic decision-making, with a focus on societal impact and human welfare. She obtained her PhD in Electrical Engineering and Computer Sciences from UC Berkeley, advised by Moritz Hardt and Michael Jordan, and has received a Best Paper Award at the International Conference on Machine Learning, a Microsoft Ada Lovelace Fellowship, and an Open Philanthropy AI Fellowship.