Abstract:
Program analysis tools help developers ensure the safety and robustness of software systems by automatically reasoning about the program. An important barrier to adoption is that these "automatic" tools often require costly input from the human using the analysis. For example, the user must annotate missing code (e.g., dynamically loaded or binary code) and additionally provide a specification encoding desired program behaviors. The focus of this research is to minimize this manual effort using techniques from machine learning. First, in the verification setting (where the goal is to prove a given correctness property), I describe an algorithm that interacts with the user to identify all relevant annotations for missing code, and show that empirically, the required manual effort is substantially reduced. Second, in the bug-finding setting, I describe an algorithm that improves the effectiveness of random testing by automatically inferring the program's input language, and show that it generates valid test cases of much higher quality than a baseline.
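
The second idea above (using an inferred input language to drive random testing) can be illustrated with a minimal sketch. The toy grammar and sampling strategy below are hypothetical stand-ins, not the algorithm from the talk; in the described work the input language is inferred automatically from the program rather than written by hand.

```python
import random

# Hypothetical toy grammar for illustration only; the described approach
# would infer the program's input language automatically.
GRAMMAR = {
    "expr": [["num"], ["expr", "+", "expr"], ["(", "expr", ")"]],
    "num":  [["0"], ["1"], ["2"]],
}

def generate(symbol="expr", depth=0, max_depth=6):
    """Sample a random string from the grammar, forcing the shortest
    production past max_depth so generation terminates."""
    if symbol not in GRAMMAR:
        return symbol  # terminal symbol
    productions = GRAMMAR[symbol]
    if depth >= max_depth:
        productions = [min(productions, key=len)]
    production = random.choice(productions)
    return "".join(generate(s, depth + 1, max_depth) for s in production)

if __name__ == "__main__":
    # Each sample is syntactically valid, so the program under test is
    # exercised past its parser, unlike with purely random byte strings.
    for _ in range(5):
        print(generate())
```

The benefit over unstructured random testing is that grammar-derived inputs reach program logic that a parser would otherwise reject.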
Bio:
Osbert Bastani is a Ph.D. student in Computer Science at Stanford University, advised by Alex Aiken. He is interested in improving the automation of program analysis tools using techniques from machine learning, artificial intelligence, and program synthesis. His work is motivated by applications to building secure systems and to analyzing software systems that rely on machine learning models.