Title: Mitigating Hallucinations in Large Language Models via Conformal Prediction
Abstract: Conformal prediction has recently emerged as an effective technique for quantifying the uncertainty of deep neural networks. It modifies the neural network to output sets of labels that are guaranteed to contain the true label with high probability under standard assumptions. In this talk, I will describe our recent work applying conformal prediction to improve the trustworthiness of large language models (LLMs). First, I will describe how we adapt conformal prediction to LLM code generation, where the key challenge is constructing reasonably sized prediction sets. Second, I will describe how we apply conformal prediction to ensure the trustworthiness of retrieval-augmented question answering. Our results demonstrate how conformal prediction can be a valuable tool for avoiding issues such as hallucinations that plague LLMs.
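To make the prediction-set idea concrete, here is a minimal sketch of split conformal prediction for an ordinary classifier, assuming access to a held-out calibration set of softmax scores and true labels. It is illustrative only and is not the speaker's method; the abstract's setting (code generation, retrieval-augmented QA) involves further adaptations discussed in the talk.

```python
# Minimal split conformal prediction sketch (illustrative; assumes a generic
# classifier with softmax outputs, not the LLM-specific methods from the talk).
import numpy as np

def calibrate(cal_softmax, cal_labels, alpha=0.1):
    """Compute the conformal score threshold from calibration data.

    cal_softmax: (n, K) array of predicted class probabilities.
    cal_labels:  (n,) array of true class indices.
    alpha:       target miscoverage rate; prediction sets contain the true
                 label with probability >= 1 - alpha (marginally).
    """
    n = len(cal_labels)
    # Nonconformity score: 1 minus the probability assigned to the true label.
    scores = 1.0 - cal_softmax[np.arange(n), cal_labels]
    # Finite-sample-corrected quantile level.
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    return np.quantile(scores, q_level, method="higher")

def predict_set(test_softmax, qhat):
    """Return all labels whose nonconformity score falls below the threshold."""
    return np.where(1.0 - test_softmax <= qhat)[0]

# Usage example with random placeholder probabilities.
rng = np.random.default_rng(0)
cal_probs = rng.dirichlet(np.ones(5), size=200)
cal_y = rng.integers(0, 5, size=200)
qhat = calibrate(cal_probs, cal_y, alpha=0.1)
print(predict_set(rng.dirichlet(np.ones(5)), qhat))
```

The coverage guarantee holds under exchangeability of calibration and test data; the practical difficulty highlighted in the abstract is keeping such sets small enough to be useful when the label space is as large as the space of generated text.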
Bio: Osbert Bastani is an assistant professor in the Department of Computer and Information Science at the University of Pennsylvania. He is broadly interested in techniques for designing trustworthy machine learning systems. Previously, he received his Ph.D. in computer science from Stanford and his A.B. in mathematics from Harvard.